2026-03-28 01:41:10.296512 | Job console starting
2026-03-28 01:41:10.312457 | Updating git repos
2026-03-28 01:41:10.379752 | Cloning repos into workspace
2026-03-28 01:41:10.632412 | Restoring repo states
2026-03-28 01:41:10.666137 | Merging changes
2026-03-28 01:41:10.666182 | Checking out repos
2026-03-28 01:41:10.933206 | Preparing playbooks
2026-03-28 01:41:11.709483 | Running Ansible setup
2026-03-28 01:41:16.253825 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-28 01:41:17.025986 |
2026-03-28 01:41:17.026168 | PLAY [Base pre]
2026-03-28 01:41:17.043633 |
2026-03-28 01:41:17.043782 | TASK [Setup log path fact]
2026-03-28 01:41:17.084807 | orchestrator | ok
2026-03-28 01:41:17.106750 |
2026-03-28 01:41:17.106943 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 01:41:17.146101 | orchestrator | ok
2026-03-28 01:41:17.162889 |
2026-03-28 01:41:17.163024 | TASK [emit-job-header : Print job information]
2026-03-28 01:41:17.225216 | # Job Information
2026-03-28 01:41:17.225427 | Ansible Version: 2.16.14
2026-03-28 01:41:17.225466 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-28 01:41:17.225502 | Pipeline: periodic-midnight
2026-03-28 01:41:17.225526 | Executor: 521e9411259a
2026-03-28 01:41:17.225596 | Triggered by: https://github.com/osism/testbed
2026-03-28 01:41:17.225621 | Event ID: 7d11dc1fbab545418744be3ecae96668
2026-03-28 01:41:17.232834 |
2026-03-28 01:41:17.232962 | LOOP [emit-job-header : Print node information]
2026-03-28 01:41:17.383254 | orchestrator | ok:
2026-03-28 01:41:17.383626 | orchestrator | # Node Information
2026-03-28 01:41:17.383685 | orchestrator | Inventory Hostname: orchestrator
2026-03-28 01:41:17.383718 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-28 01:41:17.383749 | orchestrator | Username: zuul-testbed03
2026-03-28 01:41:17.383777 | orchestrator | Distro: Debian 12.13
2026-03-28 01:41:17.383808 | orchestrator | Provider: static-testbed
2026-03-28 01:41:17.383835 | orchestrator | Region:
2026-03-28 01:41:17.383863 | orchestrator | Label: testbed-orchestrator
2026-03-28 01:41:17.383889 | orchestrator | Product Name: OpenStack Nova
2026-03-28 01:41:17.383915 | orchestrator | Interface IP: 81.163.193.140
2026-03-28 01:41:17.407579 |
2026-03-28 01:41:17.407747 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-28 01:41:17.925936 | orchestrator -> localhost | changed
2026-03-28 01:41:17.948387 |
2026-03-28 01:41:17.948689 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-28 01:41:19.097411 | orchestrator -> localhost | changed
2026-03-28 01:41:19.119851 |
2026-03-28 01:41:19.119995 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-28 01:41:19.439435 | orchestrator -> localhost | ok
2026-03-28 01:41:19.446827 |
2026-03-28 01:41:19.446993 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-28 01:41:19.476585 | orchestrator | ok
2026-03-28 01:41:19.493096 | orchestrator | included: /var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-28 01:41:19.501328 |
2026-03-28 01:41:19.501444 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-28 01:41:20.927621 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-28 01:41:20.927980 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/8732b28726ea4e9386aa58ce2948e02e_id_rsa
2026-03-28 01:41:20.928059 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/8732b28726ea4e9386aa58ce2948e02e_id_rsa.pub
2026-03-28 01:41:20.928113 | orchestrator -> localhost | The key fingerprint is:
2026-03-28 01:41:20.928161 | orchestrator -> localhost | SHA256:z5dNXs6YGpu4Th1GM5d14CYra58dqACdc/oRJvQ9G7c zuul-build-sshkey
2026-03-28 01:41:20.928204 | orchestrator -> localhost | The key's randomart image is:
2026-03-28 01:41:20.928264 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-28 01:41:20.928308 | orchestrator -> localhost | | ..o|
2026-03-28 01:41:20.928349 | orchestrator -> localhost | | . o.|
2026-03-28 01:41:20.928389 | orchestrator -> localhost | | . = = |
2026-03-28 01:41:20.928428 | orchestrator -> localhost | | o o o B |
2026-03-28 01:41:20.928467 | orchestrator -> localhost | | . S * B o .|
2026-03-28 01:41:20.928520 | orchestrator -> localhost | | . O * % B |
2026-03-28 01:41:20.928580 | orchestrator -> localhost | | o B B E o|
2026-03-28 01:41:20.928620 | orchestrator -> localhost | | = * B . |
2026-03-28 01:41:20.928660 | orchestrator -> localhost | | .*.* . |
2026-03-28 01:41:20.928700 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-28 01:41:20.928801 | orchestrator -> localhost | ok: Runtime: 0:00:00.913092
2026-03-28 01:41:20.942082 |
2026-03-28 01:41:20.942230 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-28 01:41:20.980269 | orchestrator | ok
2026-03-28 01:41:20.995702 | orchestrator | included: /var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-28 01:41:21.005447 |
2026-03-28 01:41:21.005568 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-28 01:41:21.030707 | orchestrator | skipping: Conditional result was False
2026-03-28 01:41:21.039171 |
2026-03-28 01:41:21.039287 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-28 01:41:21.669757 | orchestrator | changed
2026-03-28 01:41:21.678907 |
2026-03-28 01:41:21.679049 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-28 01:41:21.982341 | orchestrator | ok
2026-03-28 01:41:21.992004 |
2026-03-28 01:41:21.992168 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-28 01:41:22.658695 | orchestrator | ok
2026-03-28 01:41:22.668508 |
2026-03-28 01:41:22.668671 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-28 01:41:23.134509 | orchestrator | ok
2026-03-28 01:41:23.144454 |
2026-03-28 01:41:23.144675 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-28 01:41:23.170338 | orchestrator | skipping: Conditional result was False
2026-03-28 01:41:23.185909 |
2026-03-28 01:41:23.186222 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-28 01:41:23.648624 | orchestrator -> localhost | changed
2026-03-28 01:41:23.662901 |
2026-03-28 01:41:23.663051 | TASK [add-build-sshkey : Add back temp key]
2026-03-28 01:41:24.033053 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/8732b28726ea4e9386aa58ce2948e02e_id_rsa (zuul-build-sshkey)
2026-03-28 01:41:24.033639 | orchestrator -> localhost | ok: Runtime: 0:00:00.020780
2026-03-28 01:41:24.048035 |
2026-03-28 01:41:24.048191 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-28 01:41:24.494328 | orchestrator | ok
2026-03-28 01:41:24.504095 |
2026-03-28 01:41:24.504247 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-28 01:41:24.539413 | orchestrator | skipping: Conditional result was False
2026-03-28 01:41:24.599961 |
2026-03-28 01:41:24.600104 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-28 01:41:25.032282 | orchestrator | ok
2026-03-28 01:41:25.046511 |
2026-03-28 01:41:25.046693 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-28 01:41:25.087035 | orchestrator | ok
2026-03-28 01:41:25.100729 |
2026-03-28 01:41:25.100888 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-28 01:41:25.439868 | orchestrator -> localhost | ok
2026-03-28 01:41:25.448029 |
2026-03-28 01:41:25.448189 | TASK [validate-host : Collect information about the host]
2026-03-28 01:41:26.706698 | orchestrator | ok
2026-03-28 01:41:26.724048 |
2026-03-28 01:41:26.724193 | TASK [validate-host : Sanitize hostname]
2026-03-28 01:41:26.798733 | orchestrator | ok
2026-03-28 01:41:26.807243 |
2026-03-28 01:41:26.807371 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-28 01:41:27.412099 | orchestrator -> localhost | changed
2026-03-28 01:41:27.426901 |
2026-03-28 01:41:27.427098 | TASK [validate-host : Collect information about zuul worker]
2026-03-28 01:41:27.914278 | orchestrator | ok
2026-03-28 01:41:27.922386 |
2026-03-28 01:41:27.922677 | TASK [validate-host : Write out all zuul information for each host]
2026-03-28 01:41:28.473966 | orchestrator -> localhost | changed
2026-03-28 01:41:28.484977 |
2026-03-28 01:41:28.485103 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-28 01:41:28.788496 | orchestrator | ok
2026-03-28 01:41:28.798924 |
2026-03-28 01:41:28.799058 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-28 01:41:46.045431 | orchestrator | changed:
2026-03-28 01:41:46.045801 | orchestrator | .d..t...... src/
2026-03-28 01:41:46.045849 | orchestrator | .d..t...... src/github.com/
2026-03-28 01:41:46.045882 | orchestrator | .d..t...... src/github.com/osism/
2026-03-28 01:41:46.045911 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-28 01:41:46.045937 | orchestrator | RedHat.yml
2026-03-28 01:41:46.062654 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-28 01:41:46.062672 | orchestrator | RedHat.yml
2026-03-28 01:41:46.062787 | orchestrator | = 2.2.0"...
2026-03-28 01:41:56.172469 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-28 01:41:56.191880 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-28 01:41:56.663959 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-28 01:41:57.577468 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 01:41:57.647229 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-28 01:41:58.724676 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 01:41:59.101115 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-28 01:42:00.253384 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-28 01:42:00.253472 | orchestrator |
2026-03-28 01:42:00.253481 | orchestrator | Providers are signed by their developers.
2026-03-28 01:42:00.253486 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-28 01:42:00.253492 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-28 01:42:00.253499 | orchestrator |
2026-03-28 01:42:00.253503 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-28 01:42:00.253519 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-28 01:42:00.253524 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-28 01:42:00.253528 | orchestrator | you run "tofu init" in the future.
2026-03-28 01:42:00.253827 | orchestrator |
2026-03-28 01:42:00.253852 | orchestrator | OpenTofu has been successfully initialized!
2026-03-28 01:42:00.253860 | orchestrator |
2026-03-28 01:42:00.253864 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-28 01:42:00.253869 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-28 01:42:00.253873 | orchestrator | should now work.
2026-03-28 01:42:00.253877 | orchestrator |
2026-03-28 01:42:00.253881 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-28 01:42:00.253885 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-28 01:42:00.253971 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-28 01:42:00.425947 | orchestrator | Created and switched to workspace "ci"!
2026-03-28 01:42:00.426103 | orchestrator |
2026-03-28 01:42:00.426124 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-28 01:42:00.426197 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-28 01:42:00.426212 | orchestrator | for this configuration.
2026-03-28 01:42:00.562893 | orchestrator | ci.auto.tfvars
2026-03-28 01:42:01.337725 | orchestrator | default_custom.tf
2026-03-28 01:42:03.152748 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-28 01:42:03.730397 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-28 01:42:03.981282 | orchestrator |
2026-03-28 01:42:03.981398 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-28 01:42:03.981406 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-28 01:42:03.981411 | orchestrator | + create
2026-03-28 01:42:03.981416 | orchestrator | <= read (data resources)
2026-03-28 01:42:03.981429 | orchestrator |
2026-03-28 01:42:03.981433 | orchestrator | OpenTofu will perform the following actions:
2026-03-28 01:42:03.981437 | orchestrator |
2026-03-28 01:42:03.981442 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-28 01:42:03.981446 | orchestrator | # (config refers to values not yet known)
2026-03-28 01:42:03.981451 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-28 01:42:03.981455 | orchestrator | + checksum = (known after apply)
2026-03-28 01:42:03.981459 | orchestrator | + created_at = (known after apply)
2026-03-28 01:42:03.981463 | orchestrator | + file = (known after apply)
2026-03-28 01:42:03.981467 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981491 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.981495 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 01:42:03.981500 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 01:42:03.981503 | orchestrator | + most_recent = true
2026-03-28 01:42:03.981507 | orchestrator | + name = (known after apply)
2026-03-28 01:42:03.981511 | orchestrator | + protected = (known after apply)
2026-03-28 01:42:03.981515 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.981521 | orchestrator | + schema = (known after apply)
2026-03-28 01:42:03.981525 | orchestrator | + size_bytes = (known after apply)
2026-03-28 01:42:03.981529 | orchestrator | + tags = (known after apply)
2026-03-28 01:42:03.981533 | orchestrator | + updated_at = (known after apply)
2026-03-28 01:42:03.981537 | orchestrator | }
2026-03-28 01:42:03.981541 | orchestrator |
2026-03-28 01:42:03.981545 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-28 01:42:03.981549 | orchestrator | # (config refers to values not yet known)
2026-03-28 01:42:03.981553 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-28 01:42:03.981557 | orchestrator | + checksum = (known after apply)
2026-03-28 01:42:03.981561 | orchestrator | + created_at = (known after apply)
2026-03-28 01:42:03.981564 | orchestrator | + file = (known after apply)
2026-03-28 01:42:03.981568 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981572 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.981576 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 01:42:03.981580 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 01:42:03.981584 | orchestrator | + most_recent = true
2026-03-28 01:42:03.981588 | orchestrator | + name = (known after apply)
2026-03-28 01:42:03.981592 | orchestrator | + protected = (known after apply)
2026-03-28 01:42:03.981595 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.981599 | orchestrator | + schema = (known after apply)
2026-03-28 01:42:03.981603 | orchestrator | + size_bytes = (known after apply)
2026-03-28 01:42:03.981607 | orchestrator | + tags = (known after apply)
2026-03-28 01:42:03.981610 | orchestrator | + updated_at = (known after apply)
2026-03-28 01:42:03.981614 | orchestrator | }
2026-03-28 01:42:03.981620 | orchestrator |
2026-03-28 01:42:03.981624 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-28 01:42:03.981628 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-28 01:42:03.981632 | orchestrator | + content = (known after apply)
2026-03-28 01:42:03.981637 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 01:42:03.981640 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 01:42:03.981644 | orchestrator | + content_md5 = (known after apply)
2026-03-28 01:42:03.981648 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 01:42:03.981652 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 01:42:03.981655 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 01:42:03.981659 | orchestrator | + directory_permission = "0777"
2026-03-28 01:42:03.981663 | orchestrator | + file_permission = "0644"
2026-03-28 01:42:03.981667 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-28 01:42:03.981671 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981674 | orchestrator | }
2026-03-28 01:42:03.981678 | orchestrator |
2026-03-28 01:42:03.981682 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-28 01:42:03.981686 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-28 01:42:03.981690 | orchestrator | + content = (known after apply)
2026-03-28 01:42:03.981694 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 01:42:03.981697 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 01:42:03.981701 | orchestrator | + content_md5 = (known after apply)
2026-03-28 01:42:03.981705 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 01:42:03.981709 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 01:42:03.981720 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 01:42:03.981724 | orchestrator | + directory_permission = "0777"
2026-03-28 01:42:03.981728 | orchestrator | + file_permission = "0644"
2026-03-28 01:42:03.981736 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-28 01:42:03.981740 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981744 | orchestrator | }
2026-03-28 01:42:03.981748 | orchestrator |
2026-03-28 01:42:03.981751 | orchestrator | # local_file.inventory will be created
2026-03-28 01:42:03.981755 | orchestrator | + resource "local_file" "inventory" {
2026-03-28 01:42:03.981759 | orchestrator | + content = (known after apply)
2026-03-28 01:42:03.981763 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 01:42:03.981767 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 01:42:03.981770 | orchestrator | + content_md5 = (known after apply)
2026-03-28 01:42:03.981774 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 01:42:03.981778 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 01:42:03.981782 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 01:42:03.981786 | orchestrator | + directory_permission = "0777"
2026-03-28 01:42:03.981790 | orchestrator | + file_permission = "0644"
2026-03-28 01:42:03.981793 | orchestrator | + filename = "inventory.ci"
2026-03-28 01:42:03.981797 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981801 | orchestrator | }
2026-03-28 01:42:03.981807 | orchestrator |
2026-03-28 01:42:03.981811 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-28 01:42:03.981814 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-28 01:42:03.981818 | orchestrator | + content = (sensitive value)
2026-03-28 01:42:03.981822 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 01:42:03.981826 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 01:42:03.981829 | orchestrator | + content_md5 = (known after apply)
2026-03-28 01:42:03.981833 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 01:42:03.981837 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 01:42:03.981841 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 01:42:03.981844 | orchestrator | + directory_permission = "0700"
2026-03-28 01:42:03.981848 | orchestrator | + file_permission = "0600"
2026-03-28 01:42:03.981852 | orchestrator | + filename = ".id_rsa.ci"
2026-03-28 01:42:03.981856 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981860 | orchestrator | }
2026-03-28 01:42:03.981864 | orchestrator |
2026-03-28 01:42:03.981867 | orchestrator | # null_resource.node_semaphore will be created
2026-03-28 01:42:03.981871 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-28 01:42:03.981875 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981879 | orchestrator | }
2026-03-28 01:42:03.981883 | orchestrator |
2026-03-28 01:42:03.981886 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-28 01:42:03.981890 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-28 01:42:03.981894 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.981898 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.981902 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981906 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.981909 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.981913 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-28 01:42:03.981917 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.981921 | orchestrator | + size = 80
2026-03-28 01:42:03.981925 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.981929 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.981932 | orchestrator | }
2026-03-28 01:42:03.981938 | orchestrator |
2026-03-28 01:42:03.981942 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-28 01:42:03.981945 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.981949 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.981953 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.981957 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.981963 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.981967 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.981971 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-28 01:42:03.981975 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.981979 | orchestrator | + size = 80
2026-03-28 01:42:03.981982 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.981986 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.981990 | orchestrator | }
2026-03-28 01:42:03.981995 | orchestrator |
2026-03-28 01:42:03.981999 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-28 01:42:03.982003 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.982007 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.982029 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.982034 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.982038 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.982042 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.982046 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-28 01:42:03.982050 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.982053 | orchestrator | + size = 80
2026-03-28 01:42:03.982057 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.982061 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.982065 | orchestrator | }
2026-03-28 01:42:03.982664 | orchestrator |
2026-03-28 01:42:03.982683 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-28 01:42:03.982687 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.982691 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.982695 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.982699 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.982703 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.982707 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.982711 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-28 01:42:03.982714 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.982718 | orchestrator | + size = 80
2026-03-28 01:42:03.982728 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.982732 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.982736 | orchestrator | }
2026-03-28 01:42:03.982872 | orchestrator |
2026-03-28 01:42:03.982878 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-28 01:42:03.982882 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.982886 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.982890 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.982894 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.982898 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.982902 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.982906 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-28 01:42:03.982910 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.982913 | orchestrator | + size = 80
2026-03-28 01:42:03.982917 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.982921 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.982925 | orchestrator | }
2026-03-28 01:42:03.983093 | orchestrator |
2026-03-28 01:42:03.983101 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-28 01:42:03.983105 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.983109 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.983113 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.983116 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.983127 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.983131 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.983135 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-28 01:42:03.983139 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.983142 | orchestrator | + size = 80
2026-03-28 01:42:03.983146 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.983162 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.983166 | orchestrator | }
2026-03-28 01:42:03.983524 | orchestrator |
2026-03-28 01:42:03.983530 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-28 01:42:03.983534 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 01:42:03.983538 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.983542 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.983546 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.983550 | orchestrator | + image_id = (known after apply)
2026-03-28 01:42:03.983553 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.983557 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-28 01:42:03.983561 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.983565 | orchestrator | + size = 80
2026-03-28 01:42:03.983569 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.983572 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.983576 | orchestrator | }
2026-03-28 01:42:03.983700 | orchestrator |
2026-03-28 01:42:03.983706 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-28 01:42:03.983711 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.983715 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.983719 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.983723 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.983727 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.983731 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-28 01:42:03.983735 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.983739 | orchestrator | + size = 20
2026-03-28 01:42:03.983742 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.983746 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.983750 | orchestrator | }
2026-03-28 01:42:03.983868 | orchestrator |
2026-03-28 01:42:03.983874 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-28 01:42:03.983878 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.983881 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.983885 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.983889 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.983893 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.983897 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-28 01:42:03.983901 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.983904 | orchestrator | + size = 20
2026-03-28 01:42:03.983908 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.983912 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.983916 | orchestrator | }
2026-03-28 01:42:03.984063 | orchestrator |
2026-03-28 01:42:03.984071 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-28 01:42:03.984075 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.984079 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.984083 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.984086 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.984090 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.984094 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-28 01:42:03.984098 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.984109 | orchestrator | + size = 20
2026-03-28 01:42:03.984113 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.984116 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.984120 | orchestrator | }
2026-03-28 01:42:03.984274 | orchestrator |
2026-03-28 01:42:03.984280 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-28 01:42:03.984284 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.984288 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.984292 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.984296 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.984304 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.984308 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-28 01:42:03.984312 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.984316 | orchestrator | + size = 20
2026-03-28 01:42:03.984320 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.984324 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.984328 | orchestrator | }
2026-03-28 01:42:03.984594 | orchestrator |
2026-03-28 01:42:03.984600 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-28 01:42:03.984604 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.984608 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.984612 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.984615 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.984619 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.984623 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-28 01:42:03.984627 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.984631 | orchestrator | + size = 20
2026-03-28 01:42:03.984634 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.984638 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.984642 | orchestrator | }
2026-03-28 01:42:03.984776 | orchestrator |
2026-03-28 01:42:03.984782 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-28 01:42:03.984786 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.984790 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.984793 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.984797 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.984801 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.984805 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-28 01:42:03.984809 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.984812 | orchestrator | + size = 20
2026-03-28 01:42:03.984816 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.984820 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.984824 | orchestrator | }
2026-03-28 01:42:03.984929 | orchestrator |
2026-03-28 01:42:03.984935 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-28 01:42:03.984939 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.984942 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.984946 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.984950 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.984954 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.984958 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-28 01:42:03.984961 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.984965 | orchestrator | + size = 20
2026-03-28 01:42:03.984969 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.984973 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.984977 | orchestrator | }
2026-03-28 01:42:03.985101 | orchestrator |
2026-03-28 01:42:03.985106 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-28 01:42:03.985110 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 01:42:03.985119 | orchestrator | + attachment = (known after apply)
2026-03-28 01:42:03.985123 | orchestrator | + availability_zone = "nova"
2026-03-28 01:42:03.985127 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.985131 | orchestrator | + metadata = (known after apply)
2026-03-28 01:42:03.985134 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-28 01:42:03.985138 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.985142 | orchestrator | + size = 20
2026-03-28 01:42:03.985146 | orchestrator | + volume_retype_policy = "never"
2026-03-28 01:42:03.985164 | orchestrator | + volume_type = "ssd"
2026-03-28 01:42:03.985168 | orchestrator | }
2026-03-28 01:42:03.985328 | orchestrator |
2026-03-28 01:42:03.985334 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-28 01:42:03.985338 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-28 01:42:03.985342 | orchestrator | + attachment = (known after apply) 2026-03-28 01:42:03.985346 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.985350 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.985354 | orchestrator | + metadata = (known after apply) 2026-03-28 01:42:03.985357 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-28 01:42:03.985361 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.985365 | orchestrator | + size = 20 2026-03-28 01:42:03.985369 | orchestrator | + volume_retype_policy = "never" 2026-03-28 01:42:03.985373 | orchestrator | + volume_type = "ssd" 2026-03-28 01:42:03.985377 | orchestrator | } 2026-03-28 01:42:03.985853 | orchestrator | 2026-03-28 01:42:03.985859 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-28 01:42:03.985863 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-28 01:42:03.985866 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.985870 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.985874 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.985878 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.985882 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.985886 | orchestrator | + config_drive = true 2026-03-28 01:42:03.985893 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.985897 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.985901 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-28 01:42:03.985905 | orchestrator | + force_delete = false 2026-03-28 01:42:03.985909 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.985913 | 
orchestrator | + id = (known after apply) 2026-03-28 01:42:03.985916 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.985920 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.985924 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.985928 | orchestrator | + name = "testbed-manager" 2026-03-28 01:42:03.985932 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.985935 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.985939 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.985943 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.985947 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.985951 | orchestrator | + user_data = (sensitive value) 2026-03-28 01:42:03.985955 | orchestrator | 2026-03-28 01:42:03.985959 | orchestrator | + block_device { 2026-03-28 01:42:03.985963 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.985967 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.985971 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.985975 | orchestrator | + multiattach = false 2026-03-28 01:42:03.985978 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.985982 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.985990 | orchestrator | } 2026-03-28 01:42:03.985994 | orchestrator | 2026-03-28 01:42:03.985998 | orchestrator | + network { 2026-03-28 01:42:03.986002 | orchestrator | + access_network = false 2026-03-28 01:42:03.986006 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.986010 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.986029 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.986033 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.986037 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.986041 | orchestrator | + uuid = (known after apply) 2026-03-28 
01:42:03.986044 | orchestrator | } 2026-03-28 01:42:03.986048 | orchestrator | } 2026-03-28 01:42:03.986509 | orchestrator | 2026-03-28 01:42:03.986516 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-28 01:42:03.986520 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.986524 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.986527 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.986531 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.986535 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.986539 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.986543 | orchestrator | + config_drive = true 2026-03-28 01:42:03.986546 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.986550 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.986554 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.986558 | orchestrator | + force_delete = false 2026-03-28 01:42:03.986562 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.986566 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.986570 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.986573 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.986577 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.986581 | orchestrator | + name = "testbed-node-0" 2026-03-28 01:42:03.986585 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.986588 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.986592 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.986596 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.986600 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.986604 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.986607 | orchestrator | 2026-03-28 01:42:03.986611 | orchestrator | + block_device { 2026-03-28 01:42:03.986615 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.986619 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.986623 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.986626 | orchestrator | + multiattach = false 2026-03-28 01:42:03.986630 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.986634 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.986638 | orchestrator | } 2026-03-28 01:42:03.986642 | orchestrator | 2026-03-28 01:42:03.986646 | orchestrator | + network { 2026-03-28 01:42:03.986649 | orchestrator | + access_network = false 2026-03-28 01:42:03.986653 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.986657 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.986661 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.986665 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.986668 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.986672 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.986676 | orchestrator | } 2026-03-28 01:42:03.986680 | orchestrator | } 2026-03-28 01:42:03.987074 | orchestrator | 2026-03-28 01:42:03.987080 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-28 01:42:03.987084 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.987088 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.987096 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.987100 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.987104 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.987108 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.987111 
| orchestrator | + config_drive = true 2026-03-28 01:42:03.987115 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.987119 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.987123 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.987126 | orchestrator | + force_delete = false 2026-03-28 01:42:03.987130 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.987134 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.987138 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.987141 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.987145 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.987163 | orchestrator | + name = "testbed-node-1" 2026-03-28 01:42:03.987167 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.987171 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.987174 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.987178 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.987182 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.987189 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.987193 | orchestrator | 2026-03-28 01:42:03.987197 | orchestrator | + block_device { 2026-03-28 01:42:03.987201 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.987205 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.987209 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.987213 | orchestrator | + multiattach = false 2026-03-28 01:42:03.987216 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.987220 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.987224 | orchestrator | } 2026-03-28 01:42:03.987228 | orchestrator | 2026-03-28 01:42:03.987232 | orchestrator | + network { 2026-03-28 01:42:03.987235 | orchestrator | + access_network = 
false 2026-03-28 01:42:03.987239 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.987243 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.987247 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.987251 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.987254 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.987258 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.987262 | orchestrator | } 2026-03-28 01:42:03.987266 | orchestrator | } 2026-03-28 01:42:03.987704 | orchestrator | 2026-03-28 01:42:03.987710 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-28 01:42:03.987714 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.987718 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.987722 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.987726 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.987730 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.987734 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.987737 | orchestrator | + config_drive = true 2026-03-28 01:42:03.987741 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.987745 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.987749 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.987753 | orchestrator | + force_delete = false 2026-03-28 01:42:03.987756 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.987760 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.987764 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.987772 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.987776 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.987780 | orchestrator | + name = 
"testbed-node-2" 2026-03-28 01:42:03.987784 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.987788 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.987791 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.987795 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.987799 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.987803 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.987806 | orchestrator | 2026-03-28 01:42:03.987810 | orchestrator | + block_device { 2026-03-28 01:42:03.987814 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.987818 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.987822 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.987825 | orchestrator | + multiattach = false 2026-03-28 01:42:03.987829 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.987833 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.987837 | orchestrator | } 2026-03-28 01:42:03.987841 | orchestrator | 2026-03-28 01:42:03.987845 | orchestrator | + network { 2026-03-28 01:42:03.987848 | orchestrator | + access_network = false 2026-03-28 01:42:03.987852 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.987856 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.987860 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.987863 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.987867 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.987871 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.987875 | orchestrator | } 2026-03-28 01:42:03.987878 | orchestrator | } 2026-03-28 01:42:03.988412 | orchestrator | 2026-03-28 01:42:03.988425 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-28 01:42:03.988429 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.988432 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.988436 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.988440 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.988444 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.988448 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.988451 | orchestrator | + config_drive = true 2026-03-28 01:42:03.988455 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.988459 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.988463 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.988466 | orchestrator | + force_delete = false 2026-03-28 01:42:03.988470 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.988474 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.988478 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.988481 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.988485 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.988489 | orchestrator | + name = "testbed-node-3" 2026-03-28 01:42:03.988493 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.988497 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.988500 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.988504 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.988508 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.988512 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.988515 | orchestrator | 2026-03-28 01:42:03.988519 | orchestrator | + block_device { 2026-03-28 01:42:03.988523 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.988527 | orchestrator | + delete_on_termination = false 2026-03-28 
01:42:03.988531 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.988538 | orchestrator | + multiattach = false 2026-03-28 01:42:03.988542 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.988546 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.988550 | orchestrator | } 2026-03-28 01:42:03.988553 | orchestrator | 2026-03-28 01:42:03.988557 | orchestrator | + network { 2026-03-28 01:42:03.988561 | orchestrator | + access_network = false 2026-03-28 01:42:03.988565 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.988568 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.988572 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.988576 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.988580 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.988583 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.988587 | orchestrator | } 2026-03-28 01:42:03.988591 | orchestrator | } 2026-03-28 01:42:03.988809 | orchestrator | 2026-03-28 01:42:03.988815 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-28 01:42:03.988819 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.988823 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.988827 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.988831 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.988835 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.988838 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.988842 | orchestrator | + config_drive = true 2026-03-28 01:42:03.988846 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.988850 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.988853 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.988857 | 
orchestrator | + force_delete = false 2026-03-28 01:42:03.988861 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.988865 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.988869 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.988872 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.988876 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.988880 | orchestrator | + name = "testbed-node-4" 2026-03-28 01:42:03.988884 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.988888 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.988891 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.988895 | orchestrator | + stop_before_destroy = false 2026-03-28 01:42:03.988899 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.988903 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.988907 | orchestrator | 2026-03-28 01:42:03.988910 | orchestrator | + block_device { 2026-03-28 01:42:03.988914 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.988918 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.988922 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.988926 | orchestrator | + multiattach = false 2026-03-28 01:42:03.988929 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.988933 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.988937 | orchestrator | } 2026-03-28 01:42:03.988941 | orchestrator | 2026-03-28 01:42:03.988945 | orchestrator | + network { 2026-03-28 01:42:03.988949 | orchestrator | + access_network = false 2026-03-28 01:42:03.988952 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.988956 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.988960 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.988964 | orchestrator | + name = (known 
after apply) 2026-03-28 01:42:03.988968 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.988971 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.988975 | orchestrator | } 2026-03-28 01:42:03.988979 | orchestrator | } 2026-03-28 01:42:03.989444 | orchestrator | 2026-03-28 01:42:03.989476 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-28 01:42:03.989485 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 01:42:03.989493 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 01:42:03.989500 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 01:42:03.989506 | orchestrator | + all_metadata = (known after apply) 2026-03-28 01:42:03.989512 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.989519 | orchestrator | + availability_zone = "nova" 2026-03-28 01:42:03.989523 | orchestrator | + config_drive = true 2026-03-28 01:42:03.989528 | orchestrator | + created = (known after apply) 2026-03-28 01:42:03.989532 | orchestrator | + flavor_id = (known after apply) 2026-03-28 01:42:03.989536 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 01:42:03.989540 | orchestrator | + force_delete = false 2026-03-28 01:42:03.989544 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 01:42:03.989548 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.989552 | orchestrator | + image_id = (known after apply) 2026-03-28 01:42:03.989556 | orchestrator | + image_name = (known after apply) 2026-03-28 01:42:03.989560 | orchestrator | + key_pair = "testbed" 2026-03-28 01:42:03.989563 | orchestrator | + name = "testbed-node-5" 2026-03-28 01:42:03.989567 | orchestrator | + power_state = "active" 2026-03-28 01:42:03.989571 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.989575 | orchestrator | + security_groups = (known after apply) 2026-03-28 01:42:03.989579 | orchestrator | + 
stop_before_destroy = false 2026-03-28 01:42:03.989583 | orchestrator | + updated = (known after apply) 2026-03-28 01:42:03.989587 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 01:42:03.989591 | orchestrator | 2026-03-28 01:42:03.989595 | orchestrator | + block_device { 2026-03-28 01:42:03.989600 | orchestrator | + boot_index = 0 2026-03-28 01:42:03.989604 | orchestrator | + delete_on_termination = false 2026-03-28 01:42:03.989608 | orchestrator | + destination_type = "volume" 2026-03-28 01:42:03.989611 | orchestrator | + multiattach = false 2026-03-28 01:42:03.989615 | orchestrator | + source_type = "volume" 2026-03-28 01:42:03.989619 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.989623 | orchestrator | } 2026-03-28 01:42:03.989627 | orchestrator | 2026-03-28 01:42:03.989631 | orchestrator | + network { 2026-03-28 01:42:03.989635 | orchestrator | + access_network = false 2026-03-28 01:42:03.989639 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 01:42:03.989642 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 01:42:03.989646 | orchestrator | + mac = (known after apply) 2026-03-28 01:42:03.989650 | orchestrator | + name = (known after apply) 2026-03-28 01:42:03.989654 | orchestrator | + port = (known after apply) 2026-03-28 01:42:03.989658 | orchestrator | + uuid = (known after apply) 2026-03-28 01:42:03.989662 | orchestrator | } 2026-03-28 01:42:03.989666 | orchestrator | } 2026-03-28 01:42:03.989674 | orchestrator | 2026-03-28 01:42:03.989678 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-28 01:42:03.989682 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-28 01:42:03.989686 | orchestrator | + fingerprint = (known after apply) 2026-03-28 01:42:03.989690 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.989694 | orchestrator | + name = "testbed" 2026-03-28 01:42:03.989698 | orchestrator | + private_key = 
(sensitive value) 2026-03-28 01:42:03.989702 | orchestrator | + public_key = (known after apply) 2026-03-28 01:42:03.989705 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.989709 | orchestrator | + user_id = (known after apply) 2026-03-28 01:42:03.989713 | orchestrator | } 2026-03-28 01:42:03.989717 | orchestrator | 2026-03-28 01:42:03.989721 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-28 01:42:03.989725 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 01:42:03.989737 | orchestrator | + device = (known after apply) 2026-03-28 01:42:03.989741 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.989745 | orchestrator | + instance_id = (known after apply) 2026-03-28 01:42:03.989749 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.989758 | orchestrator | + volume_id = (known after apply) 2026-03-28 01:42:03.989763 | orchestrator | } 2026-03-28 01:42:03.989766 | orchestrator | 2026-03-28 01:42:03.989770 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-28 01:42:03.989774 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 01:42:03.989778 | orchestrator | + device = (known after apply) 2026-03-28 01:42:03.989782 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.989786 | orchestrator | + instance_id = (known after apply) 2026-03-28 01:42:03.989790 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.989793 | orchestrator | + volume_id = (known after apply) 2026-03-28 01:42:03.989797 | orchestrator | } 2026-03-28 01:42:03.989801 | orchestrator | 2026-03-28 01:42:03.989805 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-28 01:42:03.989809 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-28 01:42:03.989813 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989816 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989820 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.989824 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.989828 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.989831 | orchestrator | }
2026-03-28 01:42:03.989835 | orchestrator |
2026-03-28 01:42:03.989839 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-28 01:42:03.989843 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.989847 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989850 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989854 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.989858 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.989862 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.989866 | orchestrator | }
2026-03-28 01:42:03.989871 | orchestrator |
2026-03-28 01:42:03.989875 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-28 01:42:03.989879 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.989883 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989887 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989891 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.989894 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.989898 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.989902 | orchestrator | }
2026-03-28 01:42:03.989906 | orchestrator |
2026-03-28 01:42:03.989910 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-28 01:42:03.989913 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.989917 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989921 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989925 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.989928 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.989932 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.989936 | orchestrator | }
2026-03-28 01:42:03.989940 | orchestrator |
2026-03-28 01:42:03.989944 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-28 01:42:03.989947 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.989951 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989955 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989959 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.989963 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.989971 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.989975 | orchestrator | }
2026-03-28 01:42:03.989980 | orchestrator |
2026-03-28 01:42:03.989984 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-28 01:42:03.989988 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.989992 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.989996 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.989999 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.990003 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990007 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.990034 | orchestrator | }
2026-03-28 01:42:03.990039 | orchestrator |
2026-03-28 01:42:03.990043 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-28 01:42:03.990047 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-28 01:42:03.990051 | orchestrator | + device = (known after apply)
2026-03-28 01:42:03.990054 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.990058 | orchestrator | + instance_id = (known after apply)
2026-03-28 01:42:03.990062 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990066 | orchestrator | + volume_id = (known after apply)
2026-03-28 01:42:03.990070 | orchestrator | }
2026-03-28 01:42:03.990076 | orchestrator |
2026-03-28 01:42:03.990079 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-28 01:42:03.990084 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-28 01:42:03.990088 | orchestrator | + fixed_ip = (known after apply)
2026-03-28 01:42:03.990092 | orchestrator | + floating_ip = (known after apply)
2026-03-28 01:42:03.990096 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.990099 | orchestrator | + port_id = (known after apply)
2026-03-28 01:42:03.990103 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990107 | orchestrator | }
2026-03-28 01:42:03.990216 | orchestrator |
2026-03-28 01:42:03.990225 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-28 01:42:03.990229 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-28 01:42:03.990233 | orchestrator | + address = (known after apply)
2026-03-28 01:42:03.990237 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.990245 | orchestrator | + dns_domain = (known after apply)
2026-03-28 01:42:03.990250 | orchestrator |
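The nine identical `node_volume_attachment` blocks above (indices [0] through [8]; the first few fall outside this excerpt) are what a single `count`-based resource expands to at plan time. A minimal sketch of a definition that yields this kind of plan; the backing volume resource, instance reference, count, and size here are illustrative assumptions, not taken from the osism/testbed sources:

```hcl
# Assumed node/volume count; the plan shows indices [0]..[8].
variable "volume_count" {
  default = 9
}

# Hypothetical backing volumes; only the attachment resource name
# ("node_volume_attachment") appears in the plan output above.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = var.volume_count
  name  = "testbed-volume-${count.index}"
  size  = 20 # GiB, assumed
}

# One attachment per volume; terraform plan prints one
# "will be created" block per index, as in the log above.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.volume_count
  instance_id = openstack_compute_instance_v2.node[count.index].id # assumed instance resource
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```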
+ dns_name = (known after apply)
2026-03-28 01:42:03.990253 | orchestrator | + fixed_ip = (known after apply)
2026-03-28 01:42:03.990257 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.990261 | orchestrator | + pool = "public"
2026-03-28 01:42:03.990265 | orchestrator | + port_id = (known after apply)
2026-03-28 01:42:03.990269 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990273 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.990276 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.990280 | orchestrator | }
2026-03-28 01:42:03.990607 | orchestrator |
2026-03-28 01:42:03.990614 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-28 01:42:03.990617 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-28 01:42:03.990621 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.990625 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.990629 | orchestrator | + availability_zone_hints = [
2026-03-28 01:42:03.990633 | orchestrator | + "nova",
2026-03-28 01:42:03.990637 | orchestrator | ]
2026-03-28 01:42:03.990641 | orchestrator | + dns_domain = (known after apply)
2026-03-28 01:42:03.990645 | orchestrator | + external = (known after apply)
2026-03-28 01:42:03.990649 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.990652 | orchestrator | + mtu = (known after apply)
2026-03-28 01:42:03.990656 | orchestrator | + name = "net-testbed-management"
2026-03-28 01:42:03.990660 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.990669 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.990673 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990676 | orchestrator | + shared = (known after apply)
2026-03-28 01:42:03.990680 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.990684 | orchestrator | + transparent_vlan = (known after apply)
2026-03-28 01:42:03.990688 | orchestrator |
2026-03-28 01:42:03.990692 | orchestrator | + segments (known after apply)
2026-03-28 01:42:03.990696 | orchestrator | }
2026-03-28 01:42:03.990928 | orchestrator |
2026-03-28 01:42:03.990934 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-03-28 01:42:03.990938 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-03-28 01:42:03.990942 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.990946 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.990950 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.990953 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.990957 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.990961 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.990965 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.990969 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.990972 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.990976 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.990980 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.990984 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.990988 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.990992 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.990995 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.990999 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.991003 | orchestrator |
2026-03-28 01:42:03.991007 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991011 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28
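The floating IP, its association, and the management network planned above map to roughly the following HCL. The resource names and literal values (`pool = "public"`, `availability_zone_hints = ["nova"]`, the network name) are taken from the plan output; the attribute references are a sketch, not the actual testbed code:

```hcl
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public" # the external pool named in the plan
}

# Binds the floating IP to the manager's management port once both exist;
# this is why fixed_ip/port_id show as "known after apply" in the plan.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```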
01:42:03.991015 | orchestrator | }
2026-03-28 01:42:03.991018 | orchestrator |
2026-03-28 01:42:03.991022 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.991026 | orchestrator |
2026-03-28 01:42:03.991030 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.991034 | orchestrator | + ip_address = "192.168.16.5"
2026-03-28 01:42:03.991038 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.991042 | orchestrator | }
2026-03-28 01:42:03.991046 | orchestrator | }
2026-03-28 01:42:03.991273 | orchestrator |
2026-03-28 01:42:03.991280 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-03-28 01:42:03.991283 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.991287 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.991291 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.991295 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.991299 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.991303 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.991306 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.991310 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.991314 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.991317 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.991321 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.991325 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.991329 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.991333 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.991337 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.991345 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.991348 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.991352 | orchestrator |
2026-03-28 01:42:03.991356 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991360 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.991364 | orchestrator | }
2026-03-28 01:42:03.991367 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991371 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.991375 | orchestrator | }
2026-03-28 01:42:03.991379 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991383 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.991386 | orchestrator | }
2026-03-28 01:42:03.991390 | orchestrator |
2026-03-28 01:42:03.991394 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.991398 | orchestrator |
2026-03-28 01:42:03.991401 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.991405 | orchestrator | + ip_address = "192.168.16.10"
2026-03-28 01:42:03.991409 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.991413 | orchestrator | }
2026-03-28 01:42:03.991417 | orchestrator | }
2026-03-28 01:42:03.991596 | orchestrator |
2026-03-28 01:42:03.991602 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-03-28 01:42:03.991606 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.991614 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.991618 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.991622 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.991626 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.991630 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.991633 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.991637 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.991641 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.991645 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.991649 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.991652 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.991656 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.991660 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.991664 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.991668 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.991671 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.991675 | orchestrator |
2026-03-28 01:42:03.991679 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991683 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.991687 | orchestrator | }
2026-03-28 01:42:03.991691 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991694 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.991698 | orchestrator | }
2026-03-28 01:42:03.991702 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991706 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.991710 | orchestrator | }
2026-03-28 01:42:03.991714 | orchestrator |
2026-03-28 01:42:03.991717 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.991721 | orchestrator |
2026-03-28 01:42:03.991725 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.991729 | orchestrator | + ip_address = "192.168.16.11"
2026-03-28 01:42:03.991733 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.991737 | orchestrator | }
2026-03-28 01:42:03.991740 | orchestrator | }
2026-03-28 01:42:03.991882 | orchestrator |
2026-03-28 01:42:03.991888 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-03-28
01:42:03.991892 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.991896 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.991900 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.991904 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.991908 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.991916 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.991920 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.991924 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.991928 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.991931 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.991935 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.991939 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.991943 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.991947 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.991951 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.991955 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.991959 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.991962 | orchestrator |
2026-03-28 01:42:03.991966 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991970 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.991974 | orchestrator | }
2026-03-28 01:42:03.991978 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991982 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.991986 | orchestrator | }
2026-03-28 01:42:03.991990 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.991994 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.991997 | orchestrator | }
2026-03-28 01:42:03.992001 | orchestrator |
2026-03-28 01:42:03.992005 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.992009 | orchestrator |
2026-03-28 01:42:03.992013 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.992017 | orchestrator | + ip_address = "192.168.16.12"
2026-03-28 01:42:03.992021 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.992025 | orchestrator | }
2026-03-28 01:42:03.992028 | orchestrator | }
2026-03-28 01:42:03.992107 | orchestrator |
2026-03-28 01:42:03.992113 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-03-28 01:42:03.992117 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.992120 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.992124 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.992128 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.992132 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.992136 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.992139 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.992143 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.992156 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.992160 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.992164 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.992168 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.992171 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.992175 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.992179 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.992183 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.992186 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.992190 | orchestrator |
2026-03-28 01:42:03.992194 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992198 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.992202 | orchestrator | }
2026-03-28 01:42:03.992206 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992210 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.992214 | orchestrator | }
2026-03-28 01:42:03.992217 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992221 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.992225 | orchestrator | }
2026-03-28 01:42:03.992229 | orchestrator |
2026-03-28 01:42:03.992236 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.992240 | orchestrator |
2026-03-28 01:42:03.992244 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.992248 | orchestrator | + ip_address = "192.168.16.13"
2026-03-28 01:42:03.992252 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.992255 | orchestrator | }
2026-03-28 01:42:03.992259 | orchestrator | }
2026-03-28 01:42:03.992358 | orchestrator |
2026-03-28 01:42:03.992363 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-03-28 01:42:03.992367 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.992371 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.992374 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.992378 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.992382 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.992386 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.992390 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.992394 | orchestrator | + dns_assignment = (known after apply)
2026-03-28
01:42:03.992397 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.992404 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.992408 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.992412 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.992416 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.992420 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.992424 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.992427 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.992431 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.992437 | orchestrator |
2026-03-28 01:42:03.992441 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992447 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.992451 | orchestrator | }
2026-03-28 01:42:03.992455 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992459 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.992463 | orchestrator | }
2026-03-28 01:42:03.992466 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992470 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.992474 | orchestrator | }
2026-03-28 01:42:03.992478 | orchestrator |
2026-03-28 01:42:03.992482 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.992486 | orchestrator |
2026-03-28 01:42:03.992489 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.992493 | orchestrator | + ip_address = "192.168.16.14"
2026-03-28 01:42:03.992497 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.992501 | orchestrator | }
2026-03-28 01:42:03.992505 | orchestrator | }
2026-03-28 01:42:03.992777 | orchestrator |
2026-03-28 01:42:03.992785 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-03-28 01:42:03.992789 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-03-28 01:42:03.992793 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.992797 | orchestrator | + all_fixed_ips = (known after apply)
2026-03-28 01:42:03.992801 | orchestrator | + all_security_group_ids = (known after apply)
2026-03-28 01:42:03.992805 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.992808 | orchestrator | + device_id = (known after apply)
2026-03-28 01:42:03.992812 | orchestrator | + device_owner = (known after apply)
2026-03-28 01:42:03.992816 | orchestrator | + dns_assignment = (known after apply)
2026-03-28 01:42:03.992820 | orchestrator | + dns_name = (known after apply)
2026-03-28 01:42:03.992824 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.992827 | orchestrator | + mac_address = (known after apply)
2026-03-28 01:42:03.992831 | orchestrator | + network_id = (known after apply)
2026-03-28 01:42:03.992835 | orchestrator | + port_security_enabled = (known after apply)
2026-03-28 01:42:03.992839 | orchestrator | + qos_policy_id = (known after apply)
2026-03-28 01:42:03.992848 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.992852 | orchestrator | + security_group_ids = (known after apply)
2026-03-28 01:42:03.992856 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.992860 | orchestrator |
2026-03-28 01:42:03.992863 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992867 | orchestrator | + ip_address = "192.168.16.254/32"
2026-03-28 01:42:03.992871 | orchestrator | }
2026-03-28 01:42:03.992875 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992879 | orchestrator | + ip_address = "192.168.16.8/32"
2026-03-28 01:42:03.992883 | orchestrator | }
2026-03-28 01:42:03.992887 | orchestrator | + allowed_address_pairs {
2026-03-28 01:42:03.992890 | orchestrator | + ip_address = "192.168.16.9/32"
2026-03-28 01:42:03.992894 | orchestrator | }
2026-03-28 01:42:03.992898 | orchestrator |
2026-03-28 01:42:03.992902 | orchestrator | + binding (known after apply)
2026-03-28 01:42:03.992906 | orchestrator |
2026-03-28 01:42:03.992910 | orchestrator | + fixed_ip {
2026-03-28 01:42:03.992913 | orchestrator | + ip_address = "192.168.16.15"
2026-03-28 01:42:03.992917 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.992921 | orchestrator | }
2026-03-28 01:42:03.992925 | orchestrator | }
2026-03-28 01:42:03.992931 | orchestrator |
2026-03-28 01:42:03.992935 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-03-28 01:42:03.992939 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-03-28 01:42:03.992942 | orchestrator | + force_destroy = false
2026-03-28 01:42:03.992946 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.992950 | orchestrator | + port_id = (known after apply)
2026-03-28 01:42:03.992954 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.992958 | orchestrator | + router_id = (known after apply)
2026-03-28 01:42:03.992961 | orchestrator | + subnet_id = (known after apply)
2026-03-28 01:42:03.992965 | orchestrator | }
2026-03-28 01:42:03.993048 | orchestrator |
2026-03-28 01:42:03.993053 | orchestrator | # openstack_networking_router_v2.router will be created
2026-03-28 01:42:03.993057 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-03-28 01:42:03.993061 | orchestrator | + admin_state_up = (known after apply)
2026-03-28 01:42:03.993065 | orchestrator | + all_tags = (known after apply)
2026-03-28 01:42:03.993069 | orchestrator | + availability_zone_hints = [
2026-03-28 01:42:03.993073 | orchestrator | + "nova",
2026-03-28 01:42:03.993077 | orchestrator | ]
2026-03-28 01:42:03.993080 | orchestrator | + distributed = (known after apply)
2026-03-28 01:42:03.993084 | orchestrator | + enable_snat = (known after apply)
2026-03-28 01:42:03.993088 |
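The six `node_port_management` ports in this plan differ only in their fixed IP (192.168.16.10 through .15); each carries the same three `allowed_address_pairs` for the VIPs 192.168.16.254, .8 and .9. A sketch of a count-based definition consistent with that pattern; the `cidrhost` offset and the resource references are assumptions for illustration:

```hcl
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  # VIPs that may legitimately appear as source addresses on these ports.
  allowed_address_pairs { ip_address = "192.168.16.254/32" }
  allowed_address_pairs { ip_address = "192.168.16.8/32" }
  allowed_address_pairs { ip_address = "192.168.16.9/32" }

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    # host 10 + index inside 192.168.16.0/20 -> .10, .11, ... .15
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
  }
}
```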
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-03-28 01:42:03.993092 | orchestrator | + external_qos_policy_id = (known after apply)
2026-03-28 01:42:03.993096 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.993100 | orchestrator | + name = "testbed"
2026-03-28 01:42:03.993104 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.993108 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.993112 | orchestrator |
2026-03-28 01:42:03.993116 | orchestrator | + external_fixed_ip (known after apply)
2026-03-28 01:42:03.993119 | orchestrator | }
2026-03-28 01:42:03.993262 | orchestrator |
2026-03-28 01:42:03.993268 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-03-28 01:42:03.993273 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-03-28 01:42:03.993276 | orchestrator | + description = "ssh"
2026-03-28 01:42:03.993280 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.993284 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.993288 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.993292 | orchestrator | + port_range_max = 22
2026-03-28 01:42:03.993296 | orchestrator | + port_range_min = 22
2026-03-28 01:42:03.993299 | orchestrator | + protocol = "tcp"
2026-03-28 01:42:03.993303 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.993314 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.993317 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.993321 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.993325 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.993329 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.993332 | orchestrator | }
2026-03-28 01:42:03.993454 | orchestrator |
2026-03-28 01:42:03.993464 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-03-28 01:42:03.993470 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-03-28 01:42:03.993474 | orchestrator | + description = "wireguard"
2026-03-28 01:42:03.993478 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.993482 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.993486 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.993490 | orchestrator | + port_range_max = 51820
2026-03-28 01:42:03.993494 | orchestrator | + port_range_min = 51820
2026-03-28 01:42:03.993497 | orchestrator | + protocol = "udp"
2026-03-28 01:42:03.993501 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.993505 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.993509 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.993513 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.993517 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.993521 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.993525 | orchestrator | }
2026-03-28 01:42:03.993586 | orchestrator |
2026-03-28 01:42:03.993591 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-03-28 01:42:03.993595 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-03-28 01:42:03.993604 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.993608 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.993612 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.993616 | orchestrator | + protocol = "tcp"
2026-03-28 01:42:03.993620 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.993624 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.993628 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.993631 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-28 01:42:03.993635 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.993639 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.993643 | orchestrator | }
2026-03-28 01:42:03.993900 | orchestrator |
2026-03-28 01:42:03.993906 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-03-28 01:42:03.993910 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-03-28 01:42:03.993913 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.993917 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.993921 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.993925 | orchestrator | + protocol = "udp"
2026-03-28 01:42:03.993929 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.993932 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.993936 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.993940 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-03-28 01:42:03.993944 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.993948 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.993951 | orchestrator | }
2026-03-28 01:42:03.994059 | orchestrator |
2026-03-28 01:42:03.994065 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-03-28 01:42:03.994074 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-28 01:42:03.994078 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.994082 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.994086 | orchestrator | + id =
(known after apply)
2026-03-28 01:42:03.994089 | orchestrator | + protocol = "icmp"
2026-03-28 01:42:03.994093 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.994097 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.994101 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.994105 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.994108 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.994112 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.994116 | orchestrator | }
2026-03-28 01:42:03.994212 | orchestrator |
2026-03-28 01:42:03.994218 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-28 01:42:03.994222 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-28 01:42:03.994226 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.994230 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.994233 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.994237 | orchestrator | + protocol = "tcp"
2026-03-28 01:42:03.994241 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.994245 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.994249 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.994253 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.994256 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.994260 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.994264 | orchestrator | }
2026-03-28 01:42:03.994283 | orchestrator |
2026-03-28 01:42:03.994287 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-28 01:42:03.994291 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-28 01:42:03.994295 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.994299 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.994303 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.994306 | orchestrator | + protocol = "udp"
2026-03-28 01:42:03.994310 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.994314 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.994318 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.994322 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.994326 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.994329 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.994333 | orchestrator | }
2026-03-28 01:42:03.994371 | orchestrator |
2026-03-28 01:42:03.994376 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-28 01:42:03.994380 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-28 01:42:03.994384 | orchestrator | + direction = "ingress"
2026-03-28 01:42:03.994388 | orchestrator | + ethertype = "IPv4"
2026-03-28 01:42:03.994391 | orchestrator | + id = (known after apply)
2026-03-28 01:42:03.994395 | orchestrator | + protocol = "icmp"
2026-03-28 01:42:03.994399 | orchestrator | + region = (known after apply)
2026-03-28 01:42:03.994403 | orchestrator | + remote_address_group_id = (known after apply)
2026-03-28 01:42:03.994407 | orchestrator | + remote_group_id = (known after apply)
2026-03-28 01:42:03.994410 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-03-28 01:42:03.994414 | orchestrator | + security_group_id = (known after apply)
2026-03-28 01:42:03.994418 | orchestrator | + tenant_id = (known after apply)
2026-03-28 01:42:03.994425 | orchestrator | }
2026-03-28 01:42:03.994524 | orchestrator |
2026-03-28
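The security-group rules planned in this section all follow one template: ingress, IPv4, a protocol, an optional port range, and either a world-open (`0.0.0.0/0`) or subnet-scoped (`192.168.16.0/20`) remote prefix. The SSH rule, for example, corresponds to roughly this HCL; the `security_group_id` reference is an assumption, the literal values come from the plan:

```hcl
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0" # world-open; subnet-scoped rules use 192.168.16.0/20
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```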
01:42:03.994529 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-03-28 01:42:03.994533 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-03-28 01:42:03.994537 | orchestrator | + description = "vrrp" 2026-03-28 01:42:03.994541 | orchestrator | + direction = "ingress" 2026-03-28 01:42:03.994545 | orchestrator | + ethertype = "IPv4" 2026-03-28 01:42:03.994549 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.994552 | orchestrator | + protocol = "112" 2026-03-28 01:42:03.994556 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.994560 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-28 01:42:03.994564 | orchestrator | + remote_group_id = (known after apply) 2026-03-28 01:42:03.994568 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-28 01:42:03.994571 | orchestrator | + security_group_id = (known after apply) 2026-03-28 01:42:03.994575 | orchestrator | + tenant_id = (known after apply) 2026-03-28 01:42:03.994579 | orchestrator | } 2026-03-28 01:42:03.994617 | orchestrator | 2026-03-28 01:42:03.994622 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-03-28 01:42:03.994626 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-03-28 01:42:03.994629 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.994633 | orchestrator | + description = "management security group" 2026-03-28 01:42:03.994637 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.994641 | orchestrator | + name = "testbed-management" 2026-03-28 01:42:03.994645 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.994649 | orchestrator | + stateful = (known after apply) 2026-03-28 01:42:03.994652 | orchestrator | + tenant_id = (known after apply) 2026-03-28 01:42:03.994656 | orchestrator | } 2026-03-28 
01:42:03.994736 | orchestrator | 2026-03-28 01:42:03.994742 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-28 01:42:03.994746 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-28 01:42:03.994750 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.994753 | orchestrator | + description = "node security group" 2026-03-28 01:42:03.994757 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.994761 | orchestrator | + name = "testbed-node" 2026-03-28 01:42:03.994765 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.994769 | orchestrator | + stateful = (known after apply) 2026-03-28 01:42:03.994772 | orchestrator | + tenant_id = (known after apply) 2026-03-28 01:42:03.994776 | orchestrator | } 2026-03-28 01:42:03.995158 | orchestrator | 2026-03-28 01:42:03.995165 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-28 01:42:03.995169 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-28 01:42:03.995173 | orchestrator | + all_tags = (known after apply) 2026-03-28 01:42:03.995177 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-28 01:42:03.995181 | orchestrator | + dns_nameservers = [ 2026-03-28 01:42:03.995185 | orchestrator | + "8.8.8.8", 2026-03-28 01:42:03.995189 | orchestrator | + "9.9.9.9", 2026-03-28 01:42:03.995193 | orchestrator | ] 2026-03-28 01:42:03.995197 | orchestrator | + enable_dhcp = true 2026-03-28 01:42:03.995201 | orchestrator | + gateway_ip = (known after apply) 2026-03-28 01:42:03.995208 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.995212 | orchestrator | + ip_version = 4 2026-03-28 01:42:03.995216 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-28 01:42:03.995219 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-28 01:42:03.995223 | orchestrator | + name = "subnet-testbed-management" 
2026-03-28 01:42:03.995227 | orchestrator | + network_id = (known after apply) 2026-03-28 01:42:03.995231 | orchestrator | + no_gateway = false 2026-03-28 01:42:03.995235 | orchestrator | + region = (known after apply) 2026-03-28 01:42:03.995238 | orchestrator | + service_types = (known after apply) 2026-03-28 01:42:03.995246 | orchestrator | + tenant_id = (known after apply) 2026-03-28 01:42:03.995250 | orchestrator | 2026-03-28 01:42:03.995254 | orchestrator | + allocation_pool { 2026-03-28 01:42:03.995258 | orchestrator | + end = "192.168.31.250" 2026-03-28 01:42:03.995262 | orchestrator | + start = "192.168.31.200" 2026-03-28 01:42:03.995266 | orchestrator | } 2026-03-28 01:42:03.995269 | orchestrator | } 2026-03-28 01:42:03.995275 | orchestrator | 2026-03-28 01:42:03.995279 | orchestrator | # terraform_data.image will be created 2026-03-28 01:42:03.995283 | orchestrator | + resource "terraform_data" "image" { 2026-03-28 01:42:03.995287 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.995291 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 01:42:03.995294 | orchestrator | + output = (known after apply) 2026-03-28 01:42:03.995298 | orchestrator | } 2026-03-28 01:42:03.995304 | orchestrator | 2026-03-28 01:42:03.995308 | orchestrator | # terraform_data.image_node will be created 2026-03-28 01:42:03.995311 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-28 01:42:03.995315 | orchestrator | + id = (known after apply) 2026-03-28 01:42:03.995319 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 01:42:03.995323 | orchestrator | + output = (known after apply) 2026-03-28 01:42:03.995327 | orchestrator | } 2026-03-28 01:42:03.995331 | orchestrator | 2026-03-28 01:42:03.995334 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
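The plan output above (the VRRP security-group rule, the management subnet, and the `terraform_data` image resources) implies source HCL roughly like the following sketch. Attribute values are taken directly from the plan; the resource wiring (`network_id`, `security_group_id` references) is an assumption, since the actual source is not shown in the log:

```hcl
# Sketch reconstructed from the plan output; reference expressions are assumptions.

# VRRP uses IP protocol number 112, matching the "+ protocol = \"112\"" line above.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out only the top of the /20, per the plan's allocation_pool block.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# terraform_data stores the image name so dependent resources can react
# when it changes; "Ubuntu 24.04" matches the "+ input" lines above.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

resource "terraform_data" "image_node" {
  input = "Ubuntu 24.04"
}
```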
2026-03-28 01:42:03.995357 | orchestrator | 2026-03-28 01:42:03.995362 | orchestrator | Changes to Outputs: 2026-03-28 01:42:03.995366 | orchestrator | + manager_address = (sensitive value) 2026-03-28 01:42:03.995370 | orchestrator | + private_key = (sensitive value) 2026-03-28 01:42:04.235044 | orchestrator | terraform_data.image: Creating... 2026-03-28 01:42:04.235723 | orchestrator | terraform_data.image: Creation complete after 0s [id=8e01affe-acf2-8db2-e8ac-92364af5059f] 2026-03-28 01:42:04.236141 | orchestrator | terraform_data.image_node: Creating... 2026-03-28 01:42:04.236706 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=8cff3951-a44c-183f-6639-47fae8563b51] 2026-03-28 01:42:04.267214 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-28 01:42:04.267401 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-28 01:42:04.267745 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-28 01:42:04.268677 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-28 01:42:04.269138 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-28 01:42:04.275020 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-28 01:42:04.275130 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-28 01:42:04.279202 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-28 01:42:04.280145 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-28 01:42:04.283374 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2026-03-28 01:42:04.779953 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 01:42:04.784356 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-28 01:42:04.785745 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 01:42:04.787592 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-28 01:42:04.793206 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-28 01:42:04.795245 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-28 01:42:05.189694 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=d25f128e-f9cf-4fce-b5c2-9fae94e2ed34] 2026-03-28 01:42:05.201677 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-28 01:42:07.886712 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=85f5c7a4-97d3-420d-8739-a84ebbe15f9e] 2026-03-28 01:42:07.904528 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-28 01:42:07.909029 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=8b974bdf95886cda156e7c92cb0616ba4adc43e7] 2026-03-28 01:42:07.913722 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=56fe6360-407e-41e5-aa3f-c02b23be8c9e] 2026-03-28 01:42:07.923699 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-28 01:42:07.927467 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-03-28 01:42:07.930359 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=f7a085134b910e0740b8beadf1b4d793a3c24574] 2026-03-28 01:42:07.934683 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=ca153e9b-7080-4ee3-8b85-a6ac7f502dd2] 2026-03-28 01:42:07.935391 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-28 01:42:07.939547 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-28 01:42:07.943248 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a87118b5-ab65-41bd-8772-e2933164117b] 2026-03-28 01:42:07.950556 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-28 01:42:07.955069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=1464ef4d-7de4-47e1-81b9-b7b5db3a3de8] 2026-03-28 01:42:07.960124 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-28 01:42:07.963459 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=c6cb080e-98ea-450b-9996-59c87757dbab] 2026-03-28 01:42:07.967674 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-28 01:42:08.024485 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=ff7faa01-13ed-42f1-881f-ea73c666aa94] 2026-03-28 01:42:08.031668 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=db1b5262-00e3-40b1-8f63-94df47115ae4] 2026-03-28 01:42:08.034855 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=67aa0ce5-3e47-424e-8717-6160a44d1ef7] 2026-03-28 01:42:08.036758 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2026-03-28 01:42:08.531534 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=c77014e9-a354-44fa-b62b-eaaba3b9788d] 2026-03-28 01:42:08.765660 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=38eb5c4a-d3ff-4a45-a5a2-b05bdbf3d1eb] 2026-03-28 01:42:08.773368 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-28 01:42:11.307486 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=2896204d-ece7-4cc8-bdd6-31efe6d1f785] 2026-03-28 01:42:11.340962 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=0af52fc6-9f61-4e53-b423-bede1fc620c7] 2026-03-28 01:42:11.363410 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=e4bb62b9-2528-4afd-b7c7-20e80296c6f7] 2026-03-28 01:42:11.395572 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=1b8082e3-0236-4677-af0b-8478c2d5c241] 2026-03-28 01:42:11.401399 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=791014d9-bcf5-4b2a-8a4f-8adbb33edda6] 2026-03-28 01:42:11.458758 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=913ffec0-7e23-4596-ab58-7f688cd8a74f] 2026-03-28 01:42:12.204245 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=026e2b07-7ae4-44fc-bd0e-e4309fffc5d6] 2026-03-28 01:42:12.212412 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-28 01:42:12.214134 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-28 01:42:12.215324 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 
2026-03-28 01:42:12.375208 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=aa27daaf-099f-4b82-b68b-f17b4afe7db9] 2026-03-28 01:42:12.387591 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-28 01:42:12.392035 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-28 01:42:12.392827 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-28 01:42:12.393017 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-28 01:42:12.396747 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-28 01:42:12.399429 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-28 01:42:12.404383 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a6a9b0ea-8a29-49ba-a3a7-8143ce32f67a] 2026-03-28 01:42:12.405516 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-28 01:42:12.408057 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-28 01:42:12.418577 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-28 01:42:12.851033 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=bc186c58-048c-4933-a175-6807eb81818c] 2026-03-28 01:42:12.863735 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-28 01:42:13.023984 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=2fc40bd8-000d-413f-a1d3-ec047d23a642] 2026-03-28 01:42:13.037472 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 
2026-03-28 01:42:13.211982 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=8f52a1c0-2b0e-4c65-aafa-b47c72baa7dd] 2026-03-28 01:42:13.219310 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-28 01:42:13.220632 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=604e9214-e0f3-4891-a5d9-c0556b60dd65] 2026-03-28 01:42:13.228604 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-28 01:42:13.230166 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=2fc77c78-1505-4cd1-acb4-e827f542ad41] 2026-03-28 01:42:13.236214 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-28 01:42:13.436481 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=870c12a3-45f7-4ed1-a9b3-a2a2a43972ad] 2026-03-28 01:42:13.446094 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-28 01:42:13.570648 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=0a238f8c-7480-4c12-bf2c-a0ce3768a613] 2026-03-28 01:42:13.578701 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2026-03-28 01:42:13.599946 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=e5efb26f-aac4-459a-b33a-5452f08a5fc1] 2026-03-28 01:42:13.601846 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=6ae59ec6-bdbf-4d3d-9ba9-823adc544c8c] 2026-03-28 01:42:13.604029 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=bbd716a5-d13c-4a6d-84cd-50d2ac83b39b] 2026-03-28 01:42:13.635641 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=9525412f-cd6f-442e-861f-93df5982361d] 2026-03-28 01:42:13.760388 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=a584049f-d962-4c3c-a4f9-2debe80684d5] 2026-03-28 01:42:13.763450 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=2df743ef-ab9f-4b61-a8d9-c81bd7368ca5] 2026-03-28 01:42:13.967458 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=45e32f1d-5913-4982-89d9-93e7aefb29c1] 2026-03-28 01:42:14.115613 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=7d41fa94-f904-4e53-8b11-0d391ef302e1] 2026-03-28 01:42:14.269130 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=1e5bcb18-faa8-4616-bbb5-ee98cac32ece] 2026-03-28 01:42:14.670539 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=fce66624-c0d4-4651-b5b5-6ef3a8cd2df2] 2026-03-28 01:42:14.702235 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-28 01:42:14.716785 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 
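The `manager_floating_ip` and `manager_floating_ip_association` resources created above typically look like the following sketch. The pool variable name is an assumption; the association's ID in the log (reusing the floating IP's UUID) is the provider's normal behavior for `openstack_networking_floatingip_associate_v2`:

```hcl
# Sketch; var.public_network_name is an assumed variable for the external pool.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = var.public_network_name
}

# Binds the floating IP to the manager's management port created earlier in the log.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```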
2026-03-28 01:42:14.716904 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-28 01:42:14.718701 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-28 01:42:14.720320 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-28 01:42:14.733178 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-28 01:42:14.741769 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-28 01:42:16.025790 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=97420226-4acf-4f78-a6d1-89d61113f070] 2026-03-28 01:42:16.035727 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-28 01:42:16.036543 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-28 01:42:16.036597 | orchestrator | local_file.inventory: Creating... 2026-03-28 01:42:16.039905 | orchestrator | local_file.inventory: Creation complete after 0s [id=ba91b239cdaecc52c51565c9c8316e94425fa7ff] 2026-03-28 01:42:16.041298 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=312e796e8573c58c891855f2d6a4779be587c041] 2026-03-28 01:42:16.754195 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=97420226-4acf-4f78-a6d1-89d61113f070] 2026-03-28 01:42:24.717514 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-28 01:42:24.717640 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-28 01:42:24.721536 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-28 01:42:24.731757 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-03-28 01:42:24.736849 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-28 01:42:24.743959 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-28 01:42:34.718399 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-28 01:42:34.718516 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-28 01:42:34.722531 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-28 01:42:34.732895 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-28 01:42:34.737183 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-28 01:42:34.744364 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-28 01:42:35.118940 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=5571ebce-cd1f-4b9b-99bc-db8849082e35] 2026-03-28 01:42:35.151314 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=294fcd47-dda1-4bfa-ac52-e57b1bdaf1dd] 2026-03-28 01:42:35.346961 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=9eb7a439-3549-4216-a173-c164fc6a26e1] 2026-03-28 01:42:44.720041 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-28 01:42:44.737517 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-28 01:42:44.744781 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2026-03-28 01:42:45.307506 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=b3d199cc-2c76-462a-8064-85a9c5abd354] 2026-03-28 01:42:45.389765 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=fafcfced-3226-4538-a528-c1592e42f0db] 2026-03-28 01:42:45.479030 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=8763eae3-e417-4067-9046-196623603894] 2026-03-28 01:42:45.497454 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-28 01:42:45.498986 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-28 01:42:45.506095 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-28 01:42:45.518504 | orchestrator | null_resource.node_semaphore: Creation complete after 1s [id=5274892845500402449] 2026-03-28 01:42:45.524880 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-28 01:42:45.530066 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-28 01:42:45.531324 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-28 01:42:45.535450 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-28 01:42:45.539099 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-28 01:42:45.545104 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-28 01:42:45.578373 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-28 01:42:45.580068 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
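The nine `node_volume_attachment` resources above each pair a volume with a node; judging by the instance IDs in the completion messages below (three attachments per server), the distribution could be expressed with index arithmetic roughly like this sketch. The `% 3 + 3` mapping and the count value are assumptions inferred from the log, not confirmed source:

```hcl
# Sketch; the modulo mapping of volumes onto node_server[3..5] is inferred
# from the attachment IDs in the log and may differ from the real source.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```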
2026-03-28 01:42:48.910970 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=294fcd47-dda1-4bfa-ac52-e57b1bdaf1dd/67aa0ce5-3e47-424e-8717-6160a44d1ef7] 2026-03-28 01:42:48.915174 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=fafcfced-3226-4538-a528-c1592e42f0db/ca153e9b-7080-4ee3-8b85-a6ac7f502dd2] 2026-03-28 01:42:48.942389 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=8763eae3-e417-4067-9046-196623603894/a87118b5-ab65-41bd-8772-e2933164117b] 2026-03-28 01:42:48.944479 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=fafcfced-3226-4538-a528-c1592e42f0db/ff7faa01-13ed-42f1-881f-ea73c666aa94] 2026-03-28 01:42:48.970828 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=294fcd47-dda1-4bfa-ac52-e57b1bdaf1dd/c6cb080e-98ea-450b-9996-59c87757dbab] 2026-03-28 01:42:48.972887 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=8763eae3-e417-4067-9046-196623603894/1464ef4d-7de4-47e1-81b9-b7b5db3a3de8] 2026-03-28 01:42:55.062433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=294fcd47-dda1-4bfa-ac52-e57b1bdaf1dd/db1b5262-00e3-40b1-8f63-94df47115ae4] 2026-03-28 01:42:55.067348 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=fafcfced-3226-4538-a528-c1592e42f0db/56fe6360-407e-41e5-aa3f-c02b23be8c9e] 2026-03-28 01:42:55.099022 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=8763eae3-e417-4067-9046-196623603894/85f5c7a4-97d3-420d-8739-a84ebbe15f9e] 2026-03-28 01:42:55.582773 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-03-28 01:43:05.583594 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-28 01:43:05.945630 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=746e7d25-e007-4928-857f-200b93dc77ad] 2026-03-28 01:43:05.964860 | orchestrator | 2026-03-28 01:43:05.964947 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-28 01:43:05.964962 | orchestrator | 2026-03-28 01:43:05.964973 | orchestrator | Outputs: 2026-03-28 01:43:05.964983 | orchestrator | 2026-03-28 01:43:05.964993 | orchestrator | manager_address = 2026-03-28 01:43:05.965005 | orchestrator | private_key = 2026-03-28 01:43:06.413263 | orchestrator | ok: Runtime: 0:01:10.082557 2026-03-28 01:43:06.450019 | 2026-03-28 01:43:06.450184 | TASK [Fetch manager address] 2026-03-28 01:43:06.895530 | orchestrator | ok 2026-03-28 01:43:06.911862 | 2026-03-28 01:43:06.912025 | TASK [Set manager_host address] 2026-03-28 01:43:06.993285 | orchestrator | ok 2026-03-28 01:43:07.003806 | 2026-03-28 01:43:07.003942 | LOOP [Update ansible collections] 2026-03-28 01:43:07.851424 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 01:43:07.851824 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 01:43:07.851876 | orchestrator | Starting galaxy collection install process 2026-03-28 01:43:07.851915 | orchestrator | Process install dependency map 2026-03-28 01:43:07.851946 | orchestrator | Starting collection install process 2026-03-28 01:43:07.851976 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-28 01:43:07.852010 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-28 01:43:07.852046 | 
orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-28 01:43:07.852110 | orchestrator | ok: Item: commons Runtime: 0:00:00.528441 2026-03-28 01:43:08.727176 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 01:43:08.727350 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 01:43:08.727403 | orchestrator | Starting galaxy collection install process 2026-03-28 01:43:08.727443 | orchestrator | Process install dependency map 2026-03-28 01:43:08.727482 | orchestrator | Starting collection install process 2026-03-28 01:43:08.727517 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-28 01:43:08.727573 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-28 01:43:08.727610 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-28 01:43:08.727664 | orchestrator | ok: Item: services Runtime: 0:00:00.602042 2026-03-28 01:43:08.749992 | 2026-03-28 01:43:08.750260 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 01:43:21.572736 | orchestrator | ok 2026-03-28 01:43:21.583048 | 2026-03-28 01:43:21.583185 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 01:44:21.637452 | orchestrator | ok 2026-03-28 01:44:21.650705 | 2026-03-28 01:44:21.650997 | TASK [Fetch manager ssh hostkey] 2026-03-28 01:44:23.235681 | orchestrator | Output suppressed because no_log was given 2026-03-28 01:44:23.251029 | 2026-03-28 01:44:23.251221 | TASK [Get ssh keypair from terraform environment] 2026-03-28 01:44:23.787310 | orchestrator | ok: Runtime: 0:00:00.009439 2026-03-28 01:44:23.804158 | 2026-03-28 01:44:23.804341 | TASK [Point out that the following task takes some time and does not give 
any output] 2026-03-28 01:44:23.843229 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 01:44:23.854435 | 2026-03-28 01:44:23.854634 | TASK [Run manager part 0] 2026-03-28 01:44:24.729710 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 01:44:24.773514 | orchestrator | 2026-03-28 01:44:24.773562 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-28 01:44:24.773570 | orchestrator | 2026-03-28 01:44:24.773584 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-28 01:44:26.717630 | orchestrator | ok: [testbed-manager] 2026-03-28 01:44:26.717686 | orchestrator | 2026-03-28 01:44:26.717711 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 01:44:26.717721 | orchestrator | 2026-03-28 01:44:26.717730 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:44:28.653409 | orchestrator | ok: [testbed-manager] 2026-03-28 01:44:28.653468 | orchestrator | 2026-03-28 01:44:28.653476 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 01:44:29.351360 | orchestrator | ok: [testbed-manager] 2026-03-28 01:44:29.351418 | orchestrator | 2026-03-28 01:44:29.351425 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 01:44:29.389741 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:44:29.389782 | orchestrator | 2026-03-28 01:44:29.389792 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-28 01:44:29.427452 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:44:29.427495 | orchestrator | 
2026-03-28 01:44:29.427502 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-28 01:44:29.454605 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:44:29.454657 | orchestrator | 2026-03-28 01:44:29.454663 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-28 01:44:30.307939 | orchestrator | changed: [testbed-manager] 2026-03-28 01:44:30.307988 | orchestrator | 2026-03-28 01:44:30.307995 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-28 01:47:46.842604 | orchestrator | changed: [testbed-manager] 2026-03-28 01:47:46.842724 | orchestrator | 2026-03-28 01:47:46.842745 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 01:49:05.034059 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:05.034240 | orchestrator | 2026-03-28 01:49:05.034267 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-28 01:49:26.192100 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:26.192202 | orchestrator | 2026-03-28 01:49:26.192221 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-28 01:49:35.809172 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:35.809303 | orchestrator | 2026-03-28 01:49:35.809320 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 01:49:35.857654 | orchestrator | ok: [testbed-manager] 2026-03-28 01:49:35.857771 | orchestrator | 2026-03-28 01:49:35.857789 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-28 01:49:36.656696 | orchestrator | ok: [testbed-manager] 2026-03-28 01:49:36.656797 | orchestrator | 2026-03-28 01:49:36.656814 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-03-28 01:49:37.441571 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:37.441694 | orchestrator | 2026-03-28 01:49:37.441717 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-28 01:49:44.030736 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:44.030821 | orchestrator | 2026-03-28 01:49:44.030834 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-28 01:49:50.476449 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:50.476554 | orchestrator | 2026-03-28 01:49:50.476571 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-28 01:49:53.230686 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:53.230798 | orchestrator | 2026-03-28 01:49:53.230817 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-28 01:49:55.227401 | orchestrator | changed: [testbed-manager] 2026-03-28 01:49:55.227461 | orchestrator | 2026-03-28 01:49:55.227477 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-28 01:49:56.355079 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 01:49:56.355199 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 01:49:56.355215 | orchestrator | 2026-03-28 01:49:56.355231 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-28 01:49:56.401645 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 01:49:56.401737 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 01:49:56.401752 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-03-28 01:49:56.401767 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-28 01:49:59.680361 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 01:49:59.680457 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 01:49:59.680472 | orchestrator | 2026-03-28 01:49:59.680485 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-28 01:50:00.267426 | orchestrator | changed: [testbed-manager] 2026-03-28 01:50:00.267517 | orchestrator | 2026-03-28 01:50:00.267533 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-28 01:55:21.325348 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-28 01:55:21.325407 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-28 01:55:21.325416 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-28 01:55:21.325423 | orchestrator | 2026-03-28 01:55:21.325431 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-28 01:55:23.730874 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-28 01:55:23.730969 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-28 01:55:23.730985 | orchestrator | 2026-03-28 01:55:23.731000 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-28 01:55:23.731012 | orchestrator | 2026-03-28 01:55:23.731024 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:55:25.182346 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:25.182453 | orchestrator | 2026-03-28 01:55:25.182471 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-03-28 01:55:25.225465 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:25.225542 | orchestrator | 2026-03-28 01:55:25.225558 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 01:55:25.288535 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:25.288628 | orchestrator | 2026-03-28 01:55:25.288646 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 01:55:26.088638 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:26.088731 | orchestrator | 2026-03-28 01:55:26.088750 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 01:55:26.842281 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:26.842378 | orchestrator | 2026-03-28 01:55:26.842399 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 01:55:28.249787 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-28 01:55:28.249920 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-28 01:55:28.249937 | orchestrator | 2026-03-28 01:55:28.249951 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 01:55:29.689384 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:29.689429 | orchestrator | 2026-03-28 01:55:29.689437 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 01:55:31.494877 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 01:55:31.494997 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-28 01:55:31.495045 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-28 01:55:31.495073 | orchestrator | 2026-03-28 01:55:31.495094 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-03-28 01:55:31.554294 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:31.554373 | orchestrator | 2026-03-28 01:55:31.554386 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-28 01:55:31.638159 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:31.638258 | orchestrator | 2026-03-28 01:55:31.638274 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-28 01:55:32.233665 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:32.233747 | orchestrator | 2026-03-28 01:55:32.233760 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-28 01:55:32.299684 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:32.299776 | orchestrator | 2026-03-28 01:55:32.299792 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-28 01:55:33.159348 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 01:55:33.159488 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:33.159507 | orchestrator | 2026-03-28 01:55:33.159520 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-28 01:55:33.199711 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:33.199798 | orchestrator | 2026-03-28 01:55:33.199813 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-28 01:55:33.237063 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:33.237149 | orchestrator | 2026-03-28 01:55:33.237164 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-28 01:55:33.275227 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:33.275311 | orchestrator | 2026-03-28 01:55:33.275326 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-28 01:55:33.346755 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:33.346866 | orchestrator | 2026-03-28 01:55:33.346882 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-28 01:55:34.048283 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:34.048326 | orchestrator | 2026-03-28 01:55:34.048332 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 01:55:34.048336 | orchestrator | 2026-03-28 01:55:34.048341 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:55:35.390798 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:35.390870 | orchestrator | 2026-03-28 01:55:35.390884 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-28 01:55:36.306614 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:36.306691 | orchestrator | 2026-03-28 01:55:36.306707 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:55:36.306720 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-28 01:55:36.306731 | orchestrator | 2026-03-28 01:55:36.816618 | orchestrator | ok: Runtime: 0:11:12.232320 2026-03-28 01:55:36.836169 | 2026-03-28 01:55:36.836325 | TASK [Point out that the log in on the manager is now possible] 2026-03-28 01:55:36.874333 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-28 01:55:36.885187 | 2026-03-28 01:55:36.885336 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 01:55:36.923650 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 01:55:36.933130 | 2026-03-28 01:55:36.933275 | TASK [Run manager part 1 + 2] 2026-03-28 01:55:37.692478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 01:55:37.736160 | orchestrator | 2026-03-28 01:55:37.736196 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-28 01:55:37.736203 | orchestrator | 2026-03-28 01:55:37.736213 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:55:40.531379 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:40.531467 | orchestrator | 2026-03-28 01:55:40.531510 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-28 01:55:40.559274 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:40.559315 | orchestrator | 2026-03-28 01:55:40.559324 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 01:55:40.585540 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:40.585578 | orchestrator | 2026-03-28 01:55:40.585587 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 01:55:40.611205 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:40.611242 | orchestrator | 2026-03-28 01:55:40.611248 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 01:55:40.670967 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:40.671058 | orchestrator | 2026-03-28 01:55:40.671077 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 01:55:40.742100 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:40.742174 | orchestrator | 2026-03-28 01:55:40.742189 | orchestrator | TASK
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 01:55:40.794681 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-28 01:55:40.794747 | orchestrator | 2026-03-28 01:55:40.794762 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 01:55:41.421417 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:41.421586 | orchestrator | 2026-03-28 01:55:41.421608 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 01:55:41.469092 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:55:41.469185 | orchestrator | 2026-03-28 01:55:41.469213 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 01:55:42.745088 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:42.745171 | orchestrator | 2026-03-28 01:55:42.745190 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 01:55:43.296875 | orchestrator | ok: [testbed-manager] 2026-03-28 01:55:43.296978 | orchestrator | 2026-03-28 01:55:43.296995 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 01:55:44.479142 | orchestrator | changed: [testbed-manager] 2026-03-28 01:55:44.479229 | orchestrator | 2026-03-28 01:55:44.479255 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 01:56:00.692866 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:00.692944 | orchestrator | 2026-03-28 01:56:00.692961 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 01:56:01.375762 | orchestrator | ok: [testbed-manager] 2026-03-28 01:56:01.375850 | orchestrator | 2026-03-28 
01:56:01.375868 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 01:56:01.431739 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:56:01.431780 | orchestrator | 2026-03-28 01:56:01.431788 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-28 01:56:02.413957 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:02.414001 | orchestrator | 2026-03-28 01:56:02.414009 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-28 01:56:03.433823 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:03.433920 | orchestrator | 2026-03-28 01:56:03.433939 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-28 01:56:04.058710 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:04.058798 | orchestrator | 2026-03-28 01:56:04.058815 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-28 01:56:04.098905 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 01:56:04.098976 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 01:56:04.098983 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 01:56:04.098988 | orchestrator | deprecation_warnings=False in ansible.cfg. 
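As the deprecation warning above notes, these messages can be disabled via `deprecation_warnings=False` in ansible.cfg. A minimal sketch of the environment-variable equivalent (the job itself does not do this; shown only as an assumed alternative):

```shell
# Equivalent of setting deprecation_warnings=False in ansible.cfg,
# applied per-shell via Ansible's environment-variable override
export ANSIBLE_DEPRECATION_WARNINGS=False
```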
2026-03-28 01:56:06.115131 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:06.115233 | orchestrator | 2026-03-28 01:56:06.115250 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-28 01:56:15.346528 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-28 01:56:15.346611 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-28 01:56:15.346623 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-28 01:56:15.346632 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-28 01:56:15.346647 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-28 01:56:15.346655 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-28 01:56:15.346675 | orchestrator | 2026-03-28 01:56:15.346685 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-28 01:56:16.423750 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:16.423859 | orchestrator | 2026-03-28 01:56:16.423883 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-28 01:56:19.704359 | orchestrator | changed: [testbed-manager] 2026-03-28 01:56:19.704450 | orchestrator | 2026-03-28 01:56:19.704466 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-28 01:56:19.746323 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:56:19.746407 | orchestrator | 2026-03-28 01:56:19.746422 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-28 01:58:03.574573 | orchestrator | changed: [testbed-manager] 2026-03-28 01:58:03.574673 | orchestrator | 2026-03-28 01:58:03.574691 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 01:58:04.777754 | orchestrator | ok: [testbed-manager] 2026-03-28 01:58:04.777825 | 
orchestrator | 2026-03-28 01:58:04.777842 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:58:04.777855 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-28 01:58:04.777868 | orchestrator | 2026-03-28 01:58:05.077441 | orchestrator | ok: Runtime: 0:02:27.670045 2026-03-28 01:58:05.094937 | 2026-03-28 01:58:05.095131 | TASK [Reboot manager] 2026-03-28 01:58:06.640788 | orchestrator | ok: Runtime: 0:00:00.975564 2026-03-28 01:58:06.658239 | 2026-03-28 01:58:06.658486 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 01:58:23.082258 | orchestrator | ok 2026-03-28 01:58:23.093228 | 2026-03-28 01:58:23.093392 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 01:59:23.133038 | orchestrator | ok 2026-03-28 01:59:23.142930 | 2026-03-28 01:59:23.143084 | TASK [Deploy manager + bootstrap nodes] 2026-03-28 01:59:25.848976 | orchestrator | 2026-03-28 01:59:25.849229 | orchestrator | # DEPLOY MANAGER 2026-03-28 01:59:25.849255 | orchestrator | 2026-03-28 01:59:25.849270 | orchestrator | + set -e 2026-03-28 01:59:25.849283 | orchestrator | + echo 2026-03-28 01:59:25.849297 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-28 01:59:25.849315 | orchestrator | + echo 2026-03-28 01:59:25.849365 | orchestrator | + cat /opt/manager-vars.sh 2026-03-28 01:59:25.852284 | orchestrator | export NUMBER_OF_NODES=6 2026-03-28 01:59:25.852328 | orchestrator | 2026-03-28 01:59:25.852351 | orchestrator | export CEPH_VERSION=reef 2026-03-28 01:59:25.852372 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-28 01:59:25.852393 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-28 01:59:25.852425 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-28 01:59:25.852445 | orchestrator | 2026-03-28 01:59:25.852474 | orchestrator | export ARA=false 2026-03-28 01:59:25.852495 | orchestrator 
| export DEPLOY_MODE=manager 2026-03-28 01:59:25.852522 | orchestrator | export TEMPEST=false 2026-03-28 01:59:25.852544 | orchestrator | export IS_ZUUL=true 2026-03-28 01:59:25.852564 | orchestrator | 2026-03-28 01:59:25.852593 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 01:59:25.852615 | orchestrator | export EXTERNAL_API=false 2026-03-28 01:59:25.852633 | orchestrator | 2026-03-28 01:59:25.852652 | orchestrator | export IMAGE_USER=ubuntu 2026-03-28 01:59:25.852676 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-28 01:59:25.852697 | orchestrator | 2026-03-28 01:59:25.852716 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-28 01:59:25.852745 | orchestrator | 2026-03-28 01:59:25.852766 | orchestrator | + echo 2026-03-28 01:59:25.852786 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 01:59:25.853718 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 01:59:25.853762 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:59:25.853778 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:59:25.853796 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:59:25.853816 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:59:25.853828 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:59:25.853840 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:59:25.853984 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:59:25.854000 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:59:25.854012 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:59:25.854079 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:59:25.854116 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:59:25.854159 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:59:25.854177 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 01:59:25.854204 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:59:25.854231 | orchestrator | ++ export ARA=false 
2026-03-28 01:59:25.854250 | orchestrator | ++ ARA=false 2026-03-28 01:59:25.854268 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:59:25.854286 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:59:25.854304 | orchestrator | ++ export TEMPEST=false 2026-03-28 01:59:25.854321 | orchestrator | ++ TEMPEST=false 2026-03-28 01:59:25.854339 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:59:25.854357 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:59:25.854374 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 01:59:25.854392 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 01:59:25.854410 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:59:25.854427 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:59:25.854445 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:59:25.854462 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:59:25.854481 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:59:25.854501 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:59:25.854522 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:59:25.854541 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:59:25.854573 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-28 01:59:25.915385 | orchestrator | + docker version 2026-03-28 01:59:26.026845 | orchestrator | Client: Docker Engine - Community 2026-03-28 01:59:26.026954 | orchestrator | Version: 27.5.1 2026-03-28 01:59:26.026972 | orchestrator | API version: 1.47 2026-03-28 01:59:26.026984 | orchestrator | Go version: go1.22.11 2026-03-28 01:59:26.026994 | orchestrator | Git commit: 9f9e405 2026-03-28 01:59:26.027006 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 01:59:26.027017 | orchestrator | OS/Arch: linux/amd64 2026-03-28 01:59:26.027028 | orchestrator | Context: default 2026-03-28 01:59:26.027046 | orchestrator | 2026-03-28 01:59:26.027058 | 
orchestrator | Server: Docker Engine - Community 2026-03-28 01:59:26.027069 | orchestrator | Engine: 2026-03-28 01:59:26.027134 | orchestrator | Version: 27.5.1 2026-03-28 01:59:26.027149 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-28 01:59:26.027191 | orchestrator | Go version: go1.22.11 2026-03-28 01:59:26.027202 | orchestrator | Git commit: 4c9b3b0 2026-03-28 01:59:26.027214 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 01:59:26.027231 | orchestrator | OS/Arch: linux/amd64 2026-03-28 01:59:26.027242 | orchestrator | Experimental: false 2026-03-28 01:59:26.027253 | orchestrator | containerd: 2026-03-28 01:59:26.027271 | orchestrator | Version: v2.2.2 2026-03-28 01:59:26.027282 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-28 01:59:26.027294 | orchestrator | runc: 2026-03-28 01:59:26.027676 | orchestrator | Version: 1.3.4 2026-03-28 01:59:26.027715 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-28 01:59:26.027731 | orchestrator | docker-init: 2026-03-28 01:59:26.027745 | orchestrator | Version: 0.19.0 2026-03-28 01:59:26.027758 | orchestrator | GitCommit: de40ad0 2026-03-28 01:59:26.031187 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-28 01:59:26.041031 | orchestrator | + set -e 2026-03-28 01:59:26.041145 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:59:26.041159 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:59:26.041170 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:59:26.041181 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:59:26.041192 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:59:26.041203 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:59:26.041215 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:59:26.041231 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 01:59:26.041246 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 01:59:26.041257 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-03-28 01:59:26.041268 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 01:59:26.041279 | orchestrator | ++ export ARA=false 2026-03-28 01:59:26.041297 | orchestrator | ++ ARA=false 2026-03-28 01:59:26.041322 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:59:26.041335 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:59:26.041346 | orchestrator | ++ export TEMPEST=false 2026-03-28 01:59:26.041357 | orchestrator | ++ TEMPEST=false 2026-03-28 01:59:26.041367 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:59:26.041378 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:59:26.041389 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 01:59:26.041400 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 01:59:26.041410 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:59:26.041421 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:59:26.041431 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:59:26.041442 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:59:26.041453 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:59:26.041464 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:59:26.041474 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:59:26.041485 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:59:26.041496 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 01:59:26.041510 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 01:59:26.041521 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:59:26.041532 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:59:26.041547 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:59:26.041702 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-28 01:59:26.041719 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-28 01:59:26.048192 | orchestrator | + set -e 2026-03-28 
01:59:26.048277 | orchestrator | + VERSION=9.5.0
2026-03-28 01:59:26.048296 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:59:26.058373 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-28 01:59:26.058472 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:59:26.063417 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:59:26.068926 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-28 01:59:26.077911 | orchestrator | /opt/configuration ~
2026-03-28 01:59:26.077970 | orchestrator | + set -e
2026-03-28 01:59:26.077983 | orchestrator | + pushd /opt/configuration
2026-03-28 01:59:26.077992 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-28 01:59:26.079928 | orchestrator | + source /opt/venv/bin/activate
2026-03-28 01:59:26.080983 | orchestrator | ++ deactivate nondestructive
2026-03-28 01:59:26.081033 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:26.081042 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:26.081068 | orchestrator | ++ hash -r
2026-03-28 01:59:26.081080 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:26.081086 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-28 01:59:26.081111 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-28 01:59:26.081120 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-28 01:59:26.081552 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-28 01:59:26.081586 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-28 01:59:26.081595 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-28 01:59:26.081606 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-28 01:59:26.081612 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-28 01:59:26.081622 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-28 01:59:26.081628 | orchestrator | ++ export PATH
2026-03-28 01:59:26.081770 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:26.081778 | orchestrator | ++ '[' -z '' ']'
2026-03-28 01:59:26.081784 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-28 01:59:26.081790 | orchestrator | ++ PS1='(venv) '
2026-03-28 01:59:26.081795 | orchestrator | ++ export PS1
2026-03-28 01:59:26.081801 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-28 01:59:26.081914 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-28 01:59:26.081923 | orchestrator | ++ hash -r
2026-03-28 01:59:26.082002 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-28 01:59:27.248481 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-28 01:59:27.249476 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0)
2026-03-28 01:59:27.251317 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-28 01:59:27.252669 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-28 01:59:27.253903 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-28 01:59:27.263792 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-28 01:59:27.265367 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-28 01:59:27.266433 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-28 01:59:27.267785 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-28 01:59:27.306580 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-28 01:59:27.308002 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-28 01:59:27.309635 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-28 01:59:27.311075 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-28 01:59:27.314874 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-28 01:59:27.529717 | orchestrator | ++ which gilt
2026-03-28 01:59:27.533552 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-28 01:59:27.533625 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-28 01:59:27.774717 | orchestrator | osism.cfg-generics:
2026-03-28 01:59:27.942382 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-28 01:59:27.942488 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-28 01:59:27.942759 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-28 01:59:27.942932 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-28 01:59:29.001561 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-28 01:59:29.014990 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-28 01:59:29.383687 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-28 01:59:29.429429 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-28 01:59:29.429536 | orchestrator | + deactivate
2026-03-28 01:59:29.429552 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-28 01:59:29.429565 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-28 01:59:29.429575 | orchestrator | + export PATH
2026-03-28 01:59:29.429585 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-28 01:59:29.429595 | orchestrator | + '[' -n '' ']'
2026-03-28 01:59:29.429607 | orchestrator | + hash -r
2026-03-28 01:59:29.429617 | orchestrator | + '[' -n '' ']'
2026-03-28 01:59:29.429626 | orchestrator | + unset VIRTUAL_ENV
2026-03-28 01:59:29.429636 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-28 01:59:29.429645 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-28 01:59:29.429655 | orchestrator | ~
2026-03-28 01:59:29.429665 | orchestrator | + unset -f deactivate
2026-03-28 01:59:29.429675 | orchestrator | + popd
2026-03-28 01:59:29.431696 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-28 01:59:29.431760 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-28 01:59:29.433305 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-28 01:59:29.494341 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-28 01:59:29.494437 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-28 01:59:29.495534 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-28 01:59:29.557815 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 01:59:29.558629 | orchestrator | ++ semver 2024.2 2025.1
2026-03-28 01:59:29.622498 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-28 01:59:29.622611 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-28 01:59:29.719238 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-28 01:59:29.719362 | orchestrator | + source /opt/venv/bin/activate
2026-03-28 01:59:29.719395 | orchestrator | ++ deactivate nondestructive
2026-03-28 01:59:29.719426 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:29.719453 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:29.719465 | orchestrator | ++ hash -r
2026-03-28 01:59:29.719490 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:29.719502 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-28 01:59:29.719512 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-28 01:59:29.719523 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-28 01:59:29.719535 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-28 01:59:29.719546 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-28 01:59:29.719576 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-28 01:59:29.719588 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-28 01:59:29.719617 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-28 01:59:29.719664 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-28 01:59:29.719677 | orchestrator | ++ export PATH
2026-03-28 01:59:29.719692 | orchestrator | ++ '[' -n '' ']'
2026-03-28 01:59:29.719704 | orchestrator | ++ '[' -z '' ']'
2026-03-28 01:59:29.719779 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-28 01:59:29.719812 | orchestrator | ++ PS1='(venv) '
2026-03-28 01:59:29.719823 | orchestrator | ++ export PS1
2026-03-28 01:59:29.719835 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-28 01:59:29.719846 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-28 01:59:29.719860 | orchestrator | ++ hash -r
2026-03-28 01:59:29.720039 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-28 01:59:30.928573 | orchestrator | 
2026-03-28 01:59:30.928650 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-28 01:59:30.928666 | orchestrator | 
2026-03-28 01:59:30.928676 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 01:59:31.521072 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:31.521205 | orchestrator | 
2026-03-28 01:59:31.521221 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-28 01:59:32.522982 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:32.523113 | orchestrator | 
2026-03-28 01:59:32.523166 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-28 01:59:32.523203 | orchestrator | 
2026-03-28 01:59:32.523213 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-28 01:59:35.926182 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:35.926293 | orchestrator | 
2026-03-28 01:59:35.926310 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-28 01:59:35.971095 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:35.971210 | orchestrator | 
2026-03-28 01:59:35.971227 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-28 01:59:36.450560 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:36.450662 | orchestrator | 
2026-03-28 01:59:36.450679 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-28 01:59:36.494824 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:59:36.494914 | orchestrator | 
2026-03-28 01:59:36.494928 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-28 01:59:36.845745 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:36.845845 | orchestrator | 
2026-03-28 01:59:36.845860 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-28 01:59:37.186880 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:37.187001 | orchestrator | 
2026-03-28 01:59:37.187026 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-28 01:59:37.294972 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:59:37.295071 | orchestrator | 
2026-03-28 01:59:37.295087 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-28 01:59:37.295099 | orchestrator | 
2026-03-28 01:59:37.295110 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-28 01:59:39.055476 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:39.055577 | orchestrator | 
2026-03-28 01:59:39.055592 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-28 01:59:39.176079 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-28 01:59:39.176277 | orchestrator | 
2026-03-28 01:59:39.176298 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-28 01:59:39.234744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-28 01:59:39.234862 | orchestrator | 
2026-03-28 01:59:39.234885 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-28 01:59:40.368349 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-28 01:59:40.368474 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-28 01:59:40.368500 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-28 01:59:40.368521 | orchestrator | 
2026-03-28 01:59:40.368545 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-28 01:59:42.233655 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-28 01:59:42.233773 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-28 01:59:42.233793 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-28 01:59:42.233807 | orchestrator | 
2026-03-28 01:59:42.233821 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-28 01:59:42.910667 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 01:59:42.910759 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:42.910772 | orchestrator | 
2026-03-28 01:59:42.910783 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-28 01:59:43.616887 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 01:59:43.616990 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:43.617006 | orchestrator | 
2026-03-28 01:59:43.617019 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-28 01:59:43.673675 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:59:43.673775 | orchestrator | 
2026-03-28 01:59:43.673799 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-28 01:59:44.053860 | orchestrator | ok: [testbed-manager]
2026-03-28 01:59:44.053961 | orchestrator | 
2026-03-28 01:59:44.053977 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-28 01:59:44.130316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-28 01:59:44.130418 | orchestrator | 
2026-03-28 01:59:44.130443 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-28 01:59:45.337541 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:45.337654 | orchestrator | 
2026-03-28 01:59:45.337675 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-28 01:59:46.188443 | orchestrator | changed: [testbed-manager]
2026-03-28 01:59:46.188579 | orchestrator | 
2026-03-28 01:59:46.188607 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-28 02:00:00.980106 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:00.980293 | orchestrator | 
2026-03-28 02:00:00.980320 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-28 02:00:01.028768 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:00:01.028887 | orchestrator | 
2026-03-28 02:00:01.028927 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-28 02:00:01.028940 | orchestrator | 
2026-03-28 02:00:01.028953 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-28 02:00:02.916742 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:02.916844 | orchestrator | 
2026-03-28 02:00:02.916861 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-28 02:00:03.044658 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-28 02:00:03.044786 | orchestrator | 
2026-03-28 02:00:03.044809 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-28 02:00:03.107044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 02:00:03.107181 | orchestrator | 
2026-03-28 02:00:03.107207 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-28 02:00:05.872044 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:05.872159 | orchestrator | 
2026-03-28 02:00:05.872177 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-28 02:00:05.928101 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:05.928198 | orchestrator | 
2026-03-28 02:00:05.928213 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-28 02:00:06.065195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-28 02:00:06.065374 | orchestrator | 
2026-03-28 02:00:06.065391 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-28 02:00:08.812818 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-28 02:00:08.812936 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-28 02:00:08.812962 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-28 02:00:08.812983 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-28 02:00:08.813004 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-28 02:00:08.813023 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-28 02:00:08.813041 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-28 02:00:08.813060 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-28 02:00:08.813080 | orchestrator | 
2026-03-28 02:00:08.813100 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-28 02:00:09.402949 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:09.403027 | orchestrator | 
2026-03-28 02:00:09.403042 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-28 02:00:09.984031 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:09.984138 | orchestrator | 
2026-03-28 02:00:09.984165 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-28 02:00:10.058669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-28 02:00:10.058749 | orchestrator | 
2026-03-28 02:00:10.058763 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-28 02:00:11.257632 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-28 02:00:11.257737 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-28 02:00:11.257755 | orchestrator | 
2026-03-28 02:00:11.257767 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-28 02:00:11.945618 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:11.945687 | orchestrator | 
2026-03-28 02:00:11.945694 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-28 02:00:12.010683 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:00:12.010768 | orchestrator | 
2026-03-28 02:00:12.010781 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-28 02:00:12.105234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-28 02:00:12.105366 | orchestrator | 
2026-03-28 02:00:12.105378 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-28 02:00:12.751251 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:12.751397 | orchestrator | 
2026-03-28 02:00:12.751414 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-28 02:00:12.814524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-28 02:00:12.814643 | orchestrator | 
2026-03-28 02:00:12.814667 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-28 02:00:14.186893 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 02:00:14.186996 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 02:00:14.187013 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:14.187025 | orchestrator | 
2026-03-28 02:00:14.187037 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-28 02:00:14.841943 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:14.842104 | orchestrator | 
2026-03-28 02:00:14.842121 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-28 02:00:14.896578 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:00:14.896666 | orchestrator | 
2026-03-28 02:00:14.896679 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-28 02:00:15.001253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-28 02:00:15.001399 | orchestrator | 
2026-03-28 02:00:15.001416 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-28 02:00:15.537812 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:15.537926 | orchestrator | 
2026-03-28 02:00:15.537957 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-28 02:00:15.971985 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:15.972108 | orchestrator | 
2026-03-28 02:00:15.972125 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-28 02:00:17.278863 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-28 02:00:17.278978 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-28 02:00:17.278995 | orchestrator | 
2026-03-28 02:00:17.279008 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-28 02:00:17.943488 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:17.943588 | orchestrator | 
2026-03-28 02:00:17.943604 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-28 02:00:18.346243 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:18.346438 | orchestrator | 
2026-03-28 02:00:18.346464 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-28 02:00:18.705201 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:18.705281 | orchestrator | 
2026-03-28 02:00:18.705291 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-28 02:00:18.758297 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:00:18.758463 | orchestrator | 
2026-03-28 02:00:18.758489 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-28 02:00:18.843011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-28 02:00:18.843128 | orchestrator | 
2026-03-28 02:00:18.843139 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-28 02:00:18.887807 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:18.887882 | orchestrator | 
2026-03-28 02:00:18.887891 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-28 02:00:20.927252 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-28 02:00:20.927404 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-28 02:00:20.927420 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-28 02:00:20.927431 | orchestrator | 
2026-03-28 02:00:20.927442 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-28 02:00:21.659162 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:21.659302 | orchestrator | 
2026-03-28 02:00:21.659423 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-28 02:00:22.428458 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:22.428560 | orchestrator | 
2026-03-28 02:00:22.428573 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-28 02:00:23.182232 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:23.182414 | orchestrator | 
2026-03-28 02:00:23.182434 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-28 02:00:23.259407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-28 02:00:23.259522 | orchestrator | 
2026-03-28 02:00:23.259540 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-28 02:00:23.309423 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:23.309515 | orchestrator | 
2026-03-28 02:00:23.309529 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-28 02:00:24.016088 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-28 02:00:24.016219 | orchestrator | 
2026-03-28 02:00:24.016235 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-28 02:00:24.107112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-28 02:00:24.107208 | orchestrator | 
2026-03-28 02:00:24.107217 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-28 02:00:24.854611 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:24.854743 | orchestrator | 
2026-03-28 02:00:24.854759 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-28 02:00:25.485645 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:25.485775 | orchestrator | 
2026-03-28 02:00:25.485793 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-28 02:00:25.544343 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:00:25.544483 | orchestrator | 
2026-03-28 02:00:25.544495 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-28 02:00:25.604020 | orchestrator | ok: [testbed-manager]
2026-03-28 02:00:25.604172 | orchestrator | 
2026-03-28 02:00:25.604198 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-28 02:00:26.468799 | orchestrator | changed: [testbed-manager]
2026-03-28 02:00:26.468943 | orchestrator | 
2026-03-28 02:00:26.468964 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-28 02:01:40.080810 | orchestrator | changed: [testbed-manager]
2026-03-28 02:01:40.080921 | orchestrator | 
2026-03-28 02:01:40.080935 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-28 02:01:41.088659 | orchestrator | ok: [testbed-manager]
2026-03-28 02:01:41.088771 | orchestrator | 
2026-03-28 02:01:41.088797 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-28 02:01:41.146322 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:01:41.146442 | orchestrator | 
2026-03-28 02:01:41.146465 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-28 02:01:46.982946 | orchestrator | changed: [testbed-manager]
2026-03-28 02:01:46.983043 | orchestrator | 
2026-03-28 02:01:46.983054 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-28 02:01:47.044130 | orchestrator | ok: [testbed-manager]
2026-03-28 02:01:47.044217 | orchestrator | 
2026-03-28 02:01:47.044229 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-28 02:01:47.044236 | orchestrator | 
2026-03-28 02:01:47.044242 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-28 02:01:47.225002 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:01:47.225088 | orchestrator | 
2026-03-28 02:01:47.225099 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-28 02:02:47.282012 | orchestrator | Pausing for 60 seconds
2026-03-28 02:02:47.282178 | orchestrator | changed: [testbed-manager]
2026-03-28 02:02:47.282197 | orchestrator | 
2026-03-28 02:02:47.282210 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-28 02:02:50.484076 | orchestrator | changed: [testbed-manager]
2026-03-28 02:02:50.484205 | orchestrator | 
2026-03-28 02:02:50.484229 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-28 02:03:52.657558 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-28 02:03:52.657674 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-28 02:03:52.657712 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-28 02:03:52.657724 | orchestrator | changed: [testbed-manager]
2026-03-28 02:03:52.657737 | orchestrator | 
2026-03-28 02:03:52.657749 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-28 02:04:03.743268 | orchestrator | changed: [testbed-manager]
2026-03-28 02:04:03.743388 | orchestrator | 
2026-03-28 02:04:03.743405 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-28 02:04:03.828632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-28 02:04:03.828730 | orchestrator | 
2026-03-28 02:04:03.828745 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-28 02:04:03.828758 | orchestrator | 
2026-03-28 02:04:03.828770 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-28 02:04:03.882613 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:04:03.882710 | orchestrator | 
2026-03-28 02:04:03.882729 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-28 02:04:03.965803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-28 02:04:03.965925 | orchestrator | 
2026-03-28 02:04:03.965951 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-28 02:04:04.757794 | orchestrator | changed: [testbed-manager]
2026-03-28 02:04:04.757897 | orchestrator | 
2026-03-28 02:04:04.757914 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-28 02:04:08.235439 | orchestrator | ok: [testbed-manager]
2026-03-28 02:04:08.235547 | orchestrator | 
2026-03-28 02:04:08.235560 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-28 02:04:08.303482 | orchestrator | ok: [testbed-manager] => {
2026-03-28 02:04:08.303589 | orchestrator | "version_check_result.stdout_lines": [
2026-03-28 02:04:08.303608 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-28 02:04:08.303622 | orchestrator | "Checking running containers against expected versions...",
2026-03-28 02:04:08.303637 | orchestrator | "",
2026-03-28 02:04:08.303650 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-28 02:04:08.303662 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-28 02:04:08.303676 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.303690 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-28 02:04:08.303702 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.303716 | orchestrator | "",
2026-03-28 02:04:08.303728 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-28 02:04:08.303769 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-28 02:04:08.303783 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.303795 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-28 02:04:08.303807 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.303819 | orchestrator | "",
2026-03-28 02:04:08.303831 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-28 02:04:08.303844 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-28 02:04:08.303857 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.303869 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-28 02:04:08.303882 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.303894 | orchestrator | "",
2026-03-28 02:04:08.303907 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-28 02:04:08.303919 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-28 02:04:08.303931 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.303944 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-28 02:04:08.303955 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.303966 | orchestrator | "",
2026-03-28 02:04:08.303995 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-28 02:04:08.304008 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-28 02:04:08.304021 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304034 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-28 02:04:08.304048 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304060 | orchestrator | "",
2026-03-28 02:04:08.304072 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-28 02:04:08.304084 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-28 02:04:08.304097 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304140 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-28 02:04:08.304156 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304168 | orchestrator | "",
2026-03-28 02:04:08.304180 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-28 02:04:08.304193 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-28 02:04:08.304206 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304219 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-28 02:04:08.304233 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304245 | orchestrator | "",
2026-03-28 02:04:08.304258 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-28 02:04:08.304271 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-28 02:04:08.304284 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304297 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-28 02:04:08.304310 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304323 | orchestrator | "",
2026-03-28 02:04:08.304335 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-28 02:04:08.304348 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-28 02:04:08.304361 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304373 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-28 02:04:08.304385 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304397 | orchestrator | "",
2026-03-28 02:04:08.304409 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-28 02:04:08.304421 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-28 02:04:08.304433 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304444 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-28 02:04:08.304456 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304468 | orchestrator | "",
2026-03-28 02:04:08.304481 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-28 02:04:08.304494 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-28 02:04:08.304521 | orchestrator | " Enabled: true",
2026-03-28 02:04:08.304533 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-28 02:04:08.304545 | orchestrator | " Status: ✅ MATCH",
2026-03-28 02:04:08.304557 | orchestrator | "",
2026-03-28 02:04:08.304569 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-28 02:04:08.304581 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304594 | orchestrator | " Enabled: true", 2026-03-28 02:04:08.304607 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304618 | orchestrator | " Status: ✅ MATCH", 2026-03-28 02:04:08.304631 | orchestrator | "", 2026-03-28 02:04:08.304644 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-28 02:04:08.304655 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304667 | orchestrator | " Enabled: true", 2026-03-28 02:04:08.304680 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304693 | orchestrator | " Status: ✅ MATCH", 2026-03-28 02:04:08.304705 | orchestrator | "", 2026-03-28 02:04:08.304714 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-28 02:04:08.304722 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304729 | orchestrator | " Enabled: true", 2026-03-28 02:04:08.304736 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304764 | orchestrator | " Status: ✅ MATCH", 2026-03-28 02:04:08.304772 | orchestrator | "", 2026-03-28 02:04:08.304780 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-28 02:04:08.304787 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304804 | orchestrator | " Enabled: true", 2026-03-28 02:04:08.304812 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-28 02:04:08.304819 | orchestrator | " Status: ✅ MATCH", 2026-03-28 02:04:08.304827 | orchestrator | "", 2026-03-28 02:04:08.304834 | orchestrator | "=== Summary ===", 2026-03-28 02:04:08.304841 | orchestrator | "Errors (version mismatches): 0", 2026-03-28 02:04:08.304849 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-28 02:04:08.304856 | orchestrator | "", 2026-03-28 02:04:08.304864 | orchestrator | "✅ All running containers match expected versions!" 2026-03-28 02:04:08.304871 | orchestrator | ] 2026-03-28 02:04:08.304879 | orchestrator | } 2026-03-28 02:04:08.304886 | orchestrator | 2026-03-28 02:04:08.304894 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-28 02:04:08.354603 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:04:08.354698 | orchestrator | 2026-03-28 02:04:08.354712 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:04:08.354725 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-28 02:04:08.354737 | orchestrator | 2026-03-28 02:04:08.477680 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 02:04:08.477779 | orchestrator | + deactivate 2026-03-28 02:04:08.477794 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 02:04:08.477808 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 02:04:08.477819 | orchestrator | + export PATH 2026-03-28 02:04:08.477830 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 02:04:08.477842 | orchestrator | + '[' -n '' ']' 2026-03-28 02:04:08.477853 | orchestrator | + hash -r 2026-03-28 02:04:08.477864 | orchestrator | + '[' -n '' ']' 2026-03-28 02:04:08.477875 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 02:04:08.477885 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-28 02:04:08.477897 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-28 02:04:08.477908 | orchestrator | + unset -f deactivate 2026-03-28 02:04:08.477919 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-28 02:04:08.484212 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 02:04:08.484289 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-28 02:04:08.484344 | orchestrator | + local max_attempts=60 2026-03-28 02:04:08.484364 | orchestrator | + local name=ceph-ansible 2026-03-28 02:04:08.484383 | orchestrator | + local attempt_num=1 2026-03-28 02:04:08.484998 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 02:04:08.517089 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 02:04:08.517237 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 02:04:08.517262 | orchestrator | + local max_attempts=60 2026-03-28 02:04:08.517281 | orchestrator | + local name=kolla-ansible 2026-03-28 02:04:08.517313 | orchestrator | + local attempt_num=1 2026-03-28 02:04:08.518626 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 02:04:08.556617 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 02:04:08.556691 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-28 02:04:08.556702 | orchestrator | + local max_attempts=60 2026-03-28 02:04:08.556710 | orchestrator | + local name=osism-ansible 2026-03-28 02:04:08.556718 | orchestrator | + local attempt_num=1 2026-03-28 02:04:08.557802 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 02:04:08.591365 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 02:04:08.591457 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 02:04:08.591473 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-28 02:04:09.295742 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-28 02:04:09.488219 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-28 02:04:09.488315 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 02:04:09.488329 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 02:04:09.488340 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-28 02:04:09.488352 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-28 02:04:09.488383 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-28 02:04:09.488393 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-28 02:04:09.488403 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-28 02:04:09.488413 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-28 02:04:09.488423 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-28 02:04:09.488433 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-28 02:04:09.488443 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-28 02:04:09.488453 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 02:04:09.488485 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-28 02:04:09.488496 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-28 02:04:09.488506 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-28 02:04:09.494964 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-28 02:04:09.547773 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 02:04:09.547862 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-28 02:04:09.552926 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-28 02:04:21.803117 | orchestrator | 2026-03-28 02:04:21 | INFO  | Task 63aac0c1-ed52-442f-be5a-b2b27296618d (resolvconf) was prepared for execution. 2026-03-28 02:04:21.803342 | orchestrator | 2026-03-28 02:04:21 | INFO  | It takes a moment until task 63aac0c1-ed52-442f-be5a-b2b27296618d (resolvconf) has been started and output is visible here. 
2026-03-28 02:04:36.473046 | orchestrator | 2026-03-28 02:04:36.473236 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-28 02:04:36.473269 | orchestrator | 2026-03-28 02:04:36.473289 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 02:04:36.473308 | orchestrator | Saturday 28 March 2026 02:04:26 +0000 (0:00:00.158) 0:00:00.158 ******** 2026-03-28 02:04:36.473328 | orchestrator | ok: [testbed-manager] 2026-03-28 02:04:36.473346 | orchestrator | 2026-03-28 02:04:36.473365 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-28 02:04:36.473384 | orchestrator | Saturday 28 March 2026 02:04:30 +0000 (0:00:04.039) 0:00:04.198 ******** 2026-03-28 02:04:36.473402 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:04:36.473422 | orchestrator | 2026-03-28 02:04:36.473440 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 02:04:36.473462 | orchestrator | Saturday 28 March 2026 02:04:30 +0000 (0:00:00.069) 0:00:04.267 ******** 2026-03-28 02:04:36.473483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-28 02:04:36.473506 | orchestrator | 2026-03-28 02:04:36.473529 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 02:04:36.473550 | orchestrator | Saturday 28 March 2026 02:04:30 +0000 (0:00:00.088) 0:00:04.356 ******** 2026-03-28 02:04:36.473590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 02:04:36.473613 | orchestrator | 2026-03-28 02:04:36.473633 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-28 02:04:36.473652 | orchestrator | Saturday 28 March 2026 02:04:30 +0000 (0:00:00.086) 0:00:04.443 ******** 2026-03-28 02:04:36.473671 | orchestrator | ok: [testbed-manager] 2026-03-28 02:04:36.473691 | orchestrator | 2026-03-28 02:04:36.473711 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 02:04:36.473728 | orchestrator | Saturday 28 March 2026 02:04:31 +0000 (0:00:01.187) 0:00:05.630 ******** 2026-03-28 02:04:36.473749 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:04:36.473770 | orchestrator | 2026-03-28 02:04:36.473791 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 02:04:36.473812 | orchestrator | Saturday 28 March 2026 02:04:31 +0000 (0:00:00.054) 0:00:05.685 ******** 2026-03-28 02:04:36.473860 | orchestrator | ok: [testbed-manager] 2026-03-28 02:04:36.473880 | orchestrator | 2026-03-28 02:04:36.473898 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 02:04:36.473917 | orchestrator | Saturday 28 March 2026 02:04:32 +0000 (0:00:00.527) 0:00:06.212 ******** 2026-03-28 02:04:36.473935 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:04:36.473954 | orchestrator | 2026-03-28 02:04:36.473973 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 02:04:36.473993 | orchestrator | Saturday 28 March 2026 02:04:32 +0000 (0:00:00.073) 0:00:06.286 ******** 2026-03-28 02:04:36.474011 | orchestrator | changed: [testbed-manager] 2026-03-28 02:04:36.474101 | orchestrator | 2026-03-28 02:04:36.474112 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 02:04:36.474123 | orchestrator | Saturday 28 March 2026 02:04:32 +0000 (0:00:00.545) 0:00:06.832 ******** 2026-03-28 02:04:36.474133 | orchestrator | changed: 
[testbed-manager] 2026-03-28 02:04:36.474144 | orchestrator | 2026-03-28 02:04:36.474155 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 02:04:36.474166 | orchestrator | Saturday 28 March 2026 02:04:33 +0000 (0:00:01.100) 0:00:07.933 ******** 2026-03-28 02:04:36.474177 | orchestrator | ok: [testbed-manager] 2026-03-28 02:04:36.474188 | orchestrator | 2026-03-28 02:04:36.474231 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 02:04:36.474247 | orchestrator | Saturday 28 March 2026 02:04:34 +0000 (0:00:01.051) 0:00:08.984 ******** 2026-03-28 02:04:36.474259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-28 02:04:36.474270 | orchestrator | 2026-03-28 02:04:36.474281 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 02:04:36.474291 | orchestrator | Saturday 28 March 2026 02:04:35 +0000 (0:00:00.081) 0:00:09.066 ******** 2026-03-28 02:04:36.474302 | orchestrator | changed: [testbed-manager] 2026-03-28 02:04:36.474313 | orchestrator | 2026-03-28 02:04:36.474323 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:04:36.474335 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:04:36.474346 | orchestrator | 2026-03-28 02:04:36.474357 | orchestrator | 2026-03-28 02:04:36.474367 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:04:36.474378 | orchestrator | Saturday 28 March 2026 02:04:36 +0000 (0:00:01.170) 0:00:10.237 ******** 2026-03-28 02:04:36.474389 | orchestrator | =============================================================================== 2026-03-28 02:04:36.474399 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.04s 2026-03-28 02:04:36.474410 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-03-28 02:04:36.474421 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2026-03-28 02:04:36.474431 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2026-03-28 02:04:36.474442 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.05s 2026-03-28 02:04:36.474452 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-03-28 02:04:36.474487 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-03-28 02:04:36.474498 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-28 02:04:36.474509 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-28 02:04:36.474520 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-28 02:04:36.474530 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-28 02:04:36.474541 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-28 02:04:36.474563 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-03-28 02:04:36.790810 | orchestrator | + osism apply sshconfig 2026-03-28 02:04:48.926925 | orchestrator | 2026-03-28 02:04:48 | INFO  | Task b090925c-d025-4bd3-9351-f683f5005ea1 (sshconfig) was prepared for execution. 
2026-03-28 02:04:48.927031 | orchestrator | 2026-03-28 02:04:48 | INFO  | It takes a moment until task b090925c-d025-4bd3-9351-f683f5005ea1 (sshconfig) has been started and output is visible here. 2026-03-28 02:05:01.662077 | orchestrator | 2026-03-28 02:05:01.662206 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-28 02:05:01.662225 | orchestrator | 2026-03-28 02:05:01.662260 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-28 02:05:01.662324 | orchestrator | Saturday 28 March 2026 02:04:53 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-03-28 02:05:01.662337 | orchestrator | ok: [testbed-manager] 2026-03-28 02:05:01.662351 | orchestrator | 2026-03-28 02:05:01.662364 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-28 02:05:01.662376 | orchestrator | Saturday 28 March 2026 02:04:54 +0000 (0:00:00.600) 0:00:00.764 ******** 2026-03-28 02:05:01.662389 | orchestrator | changed: [testbed-manager] 2026-03-28 02:05:01.662402 | orchestrator | 2026-03-28 02:05:01.662416 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-28 02:05:01.662429 | orchestrator | Saturday 28 March 2026 02:04:54 +0000 (0:00:00.606) 0:00:01.371 ******** 2026-03-28 02:05:01.662441 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-28 02:05:01.662454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-28 02:05:01.662467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-28 02:05:01.662479 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-28 02:05:01.662491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-28 02:05:01.662503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-28 02:05:01.662515 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-28 02:05:01.662528 | orchestrator | 2026-03-28 02:05:01.662540 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-28 02:05:01.662553 | orchestrator | Saturday 28 March 2026 02:05:00 +0000 (0:00:06.022) 0:00:07.393 ******** 2026-03-28 02:05:01.662565 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:05:01.662578 | orchestrator | 2026-03-28 02:05:01.662590 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-28 02:05:01.662603 | orchestrator | Saturday 28 March 2026 02:05:00 +0000 (0:00:00.072) 0:00:07.466 ******** 2026-03-28 02:05:01.662615 | orchestrator | changed: [testbed-manager] 2026-03-28 02:05:01.662628 | orchestrator | 2026-03-28 02:05:01.662640 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:05:01.662654 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:05:01.662667 | orchestrator | 2026-03-28 02:05:01.662680 | orchestrator | 2026-03-28 02:05:01.662692 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:05:01.662705 | orchestrator | Saturday 28 March 2026 02:05:01 +0000 (0:00:00.603) 0:00:08.069 ******** 2026-03-28 02:05:01.662717 | orchestrator | =============================================================================== 2026-03-28 02:05:01.662730 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.02s 2026-03-28 02:05:01.662743 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.61s 2026-03-28 02:05:01.662755 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-03-28 02:05:01.662768 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.60s 2026-03-28 02:05:01.662780 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-28 02:05:01.982109 | orchestrator | + osism apply known-hosts 2026-03-28 02:05:14.110623 | orchestrator | 2026-03-28 02:05:14 | INFO  | Task 4e6d22e3-d659-4518-b945-ca3adf39147f (known-hosts) was prepared for execution. 2026-03-28 02:05:14.110733 | orchestrator | 2026-03-28 02:05:14 | INFO  | It takes a moment until task 4e6d22e3-d659-4518-b945-ca3adf39147f (known-hosts) has been started and output is visible here. 2026-03-28 02:05:31.709972 | orchestrator | 2026-03-28 02:05:31.710145 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-28 02:05:31.710165 | orchestrator | 2026-03-28 02:05:31.710176 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-28 02:05:31.710186 | orchestrator | Saturday 28 March 2026 02:05:18 +0000 (0:00:00.174) 0:00:00.174 ******** 2026-03-28 02:05:31.710197 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 02:05:31.710208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 02:05:31.710218 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 02:05:31.710228 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 02:05:31.710237 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 02:05:31.710247 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 02:05:31.710257 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 02:05:31.710266 | orchestrator | 2026-03-28 02:05:31.710276 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-28 02:05:31.710287 | orchestrator | Saturday 28 March 2026 02:05:24 +0000 (0:00:06.255) 0:00:06.429 ******** 2026-03-28 02:05:31.710299 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 02:05:31.710312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 02:05:31.710321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 02:05:31.710331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 02:05:31.710341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 02:05:31.710384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 02:05:31.710395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 02:05:31.710406 | orchestrator | 2026-03-28 02:05:31.710415 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.710425 | orchestrator | Saturday 28 March 2026 02:05:24 +0000 (0:00:00.183) 0:00:06.613 ******** 2026-03-28 02:05:31.710436 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCpep9qtPpIfTIYB0P/VLSWQE6OjY3jF2bogmE/+7dvyw3Q4zTvTNgmFxbR72eqbFB5c9RVz2WWZsqkqlOUhdEE=) 2026-03-28 02:05:31.710456 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzNYOUd+ZEqTzqjOff6PStEVNhuuS3Lot/LQRr6W4Hk0rYJq3Q+47EeKguKVqgnsmO2+46qKP9GLOsZMSEjyJPEci0fhUPgI1vis9+OCzyGULAjhd1zBWYUXw4Z6ePnHuod4KhWuV/YFMgWRj+xAeV5ApYG4uYI1EFt0j1dj52Fnr5BwxVx1Pwj7C6O4j/b0FdcdrCVLuqzJjDpxohVwWL1SJMEquvkEwxw6QYeDJhDgvyFyyEzclM1kUsLxfsye0LJ9x0qJcOz7zLIu59ugIjl3FxzoFZsXQAuCT7UAGkVdgRbFRwobzn+cmtOyb0lvg10Wy/H2/nbmMRBCgbe81ZnBaVMjAjN1BNRwu1NC7S7VmhZXUWAE/gyvsDXx/L9ngCT8/wvDtW3PjYy0ZaoCobLGD/xHmAkQHJwBvAhxEoeq8pF4SVA6uEskGKDF5W6gAsz1YyXy91coW7y3W4m50CtAAy1MYKZe5c9PYLkggKss19TXX81DMwGacaxUwqeOk=) 2026-03-28 02:05:31.710491 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6XaoC421xBwbbhqi57bmIRo2isJT+6xB3LcUML9DK/) 2026-03-28 02:05:31.710505 | orchestrator | 2026-03-28 02:05:31.710516 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.710527 | orchestrator | Saturday 28 March 2026 02:05:26 +0000 (0:00:01.323) 0:00:07.936 ******** 2026-03-28 02:05:31.710557 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DuFyOn/L9cj67t0WjjP5xSQM/7WbeUSoIKBawMiKVFFIB0FEo4DfutL4r2aC+XrGVsGQWVmGWhltThNzKmXYj38bNS0mddUk5/XMKowOOTKQUaTDfCB2eHaWm0VljAjYNY0Z0RSdy23THYX0tX/l14S2Xtqd7Us0zVTo4807rQQ3Q9I8skkgnFfUc5YBZuJ3xqQ3WXuZkY8yDK9ay0vxRlfjEqn2Sj/OA/8NcqQmwQckWCmhNFdoUM1uOBnTcRDkCw1i+2aK3FKJrQK3joJ/H5R22fHEmF02MY7ZdFTGkg6/2XAehvL+Dq5zvXKK4qT78JSvCpy+8vB44Ktnfi2f/0H6Wk9bkwAmLSu602gGkbI2eaJ4Xisqu88wVyLMmKd74w6M+r0Ewbgx9RPcN2W30Il4JazboMeyKG7t5Hw4OuXjhQL7/x/2sooUQSIvGNMYpM2QJGfloylCj/x+EAOB2UiaMI1FtozAPJrj0Kd86HlIBdJtly6Gikm9/xYC0lU=) 2026-03-28 02:05:31.710570 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCLyEPFOn5Nd4GcoLE5EfI40kVxueR6UG+3EcReSMRJA/f3Qj144TVSI6SDxl4ivOeXKrA5NaJqSGsKuUkAMOXA=) 2026-03-28 02:05:31.710582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPzxAnH6liIN3pa3CYB+WYOCvnWTMBE/UM3GDgsLCz34) 2026-03-28 02:05:31.710594 | orchestrator | 2026-03-28 02:05:31.710605 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.710616 | orchestrator | Saturday 28 March 2026 02:05:27 +0000 (0:00:01.126) 0:00:09.062 ******** 2026-03-28 02:05:31.710629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICcU1sXhDckvL9BjHrzy359MIvlozlhQJ+jBvaMhFnO+) 2026-03-28 02:05:31.710640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnVlZjPfhq7b1HIh1Vmyz2o8U4/xlXwvZ2FZeeNDjS+1FIU47v2l/gHC6cP3kxXH7efYnYDVXk3N7VBGKDjXEvCTcvFHxbnyNj0+2/QymKlep4EY2Hw2RyKuePECJEsi/q90HAgSjolvpj+1bjryndkkErLC+ETlzHZLHb2SSGVP97LeBI0CIhQM+aMf2WMKZ4076FVvO4dBlxNE5KX2fwjWsSUfTqneuCqzLpK90IE1gZG81n95SPTOuxCgqBscUoYM/pDtHb0/lQwG9wgirNtprOHKY1jtMip6j1XWP1lTy9b8Xfe6yEJyvcFljCb3P5oXNgx1Y5jorjR9EM7Hsvu6Lfvc9601g4VPuVOBr1X0JlYtIupIR6Fu1Yt2COzVke5p2buT2ljhwDUyZAAvYEdzhhe3iFT1ZamvdjbQ/qJsbgQyswO/KmaY499l0PRMOeQKw1NeyyzrJa2QXbyQx/xNQ+QjXYAsXTv1N3guW/nELq5FDAQ4PDrEKqZQbdzKc=) 2026-03-28 02:05:31.710652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAYy6V2C97AEEEU1BCeqMgxUTMDul4rjy/lth0l/iEUr3PovxZMmo8UPaFFXdURbsL9MFi+abF8XU/I4wylaN+M=) 2026-03-28 02:05:31.710664 | orchestrator | 2026-03-28 02:05:31.710675 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.710686 | orchestrator | Saturday 28 March 2026 02:05:28 +0000 (0:00:01.126) 0:00:10.189 ******** 
2026-03-28 02:05:31.710696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhtx24hZNjY7wjm0AZIubswQjkWvuLHSTszxLrfkrMQuy9BuWVUq/bKrKqvgvk2kKn8Deat9RflnVskyPY4kor9xinB1sd6aL7ubhoDyok3TDaqo0l8YzPXfbP0XEzZ7gxoXvZ8Y/FwbFE7o8FnRJgjTmANId8HGukFLszGsLorUrBjQn3WPk3335WcQ+uReVn6ZsbHqNxcGG6uUxFqaxjYnmS+2JeTd/5r2fafb5PhlM9PCk7jFDqcXSTE4GdQ5L/SqSfKpNwWnvAHHLJyec3rJN1nHrxrc5xZPa4uc4CaR1I4zDPu+rJzwNHy4IgL6h9+xKU5Wrx+jfAMDRyIPVvNodiUdk2UFobEHRvagBttu5mPeiVLIWwZM4UDIlNlNgDfci6zMV7nxzy4y2gBxvbRegNyzhdYAJQSH/VGecmjKjdutTqEOvziOuvz8cUBViQxSoWt4PWnfnMsk7/Ejy0BcNdh91+e3vmgj4i0eAnJXe+zPpl+T4kSXfZf95bfAE=) 2026-03-28 02:05:31.710715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC58PWpO8CM9OL21wfCigdPkU7QLDJGp+5IW2CxvlNgi3QT99d6OmNc9P1Dh475O1x4FfCuvyRtn7Tw1vf67p4A=) 2026-03-28 02:05:31.710749 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN6xXnUzLnmm1YxaOtqzR8bkvM4RpLV0zHORsQa5UVOU) 2026-03-28 02:05:31.710759 | orchestrator | 2026-03-28 02:05:31.710856 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.710870 | orchestrator | Saturday 28 March 2026 02:05:29 +0000 (0:00:01.059) 0:00:11.248 ******** 2026-03-28 02:05:31.710975 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXL/IWwULTd6Vri2H0/CXmBDIMTDNAcEyWt50CySqiCaN0yw50zmx7TdOvfz3HSJGbDKlFaAjq2gAWrST3CTBAOkfM4bao+nBsL21X85WV5d1PgD2JY2D/kTYJCA8HK0o1ImEQPIXxT84EaU0ESVZd5w79ZPj21vZYspcaUuqUnvRvPQ6wfa4ROAyZzVm5riqM3ypPnxZf8GFE0LzsCi7lzqYA9bhrb2z5MimtGq/xeDQPlipM6Ovof567SSX6OhQHR6C32znhiTiwiAR1CWyT309okkgVyFmBl8rGHSV1CerPM8OMR46sFRqzIQr4DVUgY5VeKAYBaQAGvnmaAnsI9R7sYlMvpAywVmPmTggSKqZKL6e0WNPULDkv6c8jRKNLYCXefoD7BAcAeVmJVg2yIiwcQYh8HdlbClPhc9nKftZIXYNm4inxwGZ8LxPqx6xAkVJEkquQf1eK9OZv0huM6RDAp3nxeGQ9F+FD/oplKEbwjCLPz8etmPOH4PBnzKE=) 2026-03-28 02:05:31.710998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFr/r7YuaZlUfxQKVWFrEMMMDMStE7t2223ZoJ3YbdykRDEO7Yhwvov2NTPY6jphFUvx8w+lW2+vhDQhmmJ2Uq4=) 2026-03-28 02:05:31.711015 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEr+nDUHce6YCsrKzlM9dZVJ0P46AAnLWiFMKOzMMhSx) 2026-03-28 02:05:31.711032 | orchestrator | 2026-03-28 02:05:31.711048 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:31.711065 | orchestrator | Saturday 28 March 2026 02:05:30 +0000 (0:00:01.140) 0:00:12.389 ******** 2026-03-28 02:05:31.711099 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH+qWWMxbFC6Ku3AzG7ZkpfD4hkJc7aHVamhsuLAc70Q+fnVzsIX8wycKx0zJrFWR8s3Xudn96TqiZdhY5Vs1YqxTvpKgtpblmeRq933ulvvxSBYi5iKOvQ67V2KvcBa4Oj634V0ngo+5UNUAzhMGJLNtvcxV9qZh4RSeY+DR+XvwplymOARnZL3pyL4yVKZsJY73wqxsb4VoIq3cLP/mCmAB093WoNFZ1mPQLemeOUsaZlGh1L48mioWuSXCxmN+xUAq2ru5x4n/KK7hl0CM6j/vOzAqD2/eZJPULNQ4+j20eSEYdmV+EewdVrsqab6MadUOVjeIKkOqbyYeMVi2bBjuNXGw/JMiCiKn1FTIVmyOpNkMY/kj3cm1s26fE6AQJvxSn3Sip0gYZq6dMPbex4YyNEwx+YYHCxDblOZYD2/VwepvYZL7V+RdNJnRsScd7NuO/y59DoVpMyNTT0JkPkfhNZJGAO/2AfFLlWHy0lLo1sgXA8GJ0nV2eWpLXVQc=) 2026-03-28 02:05:43.111313 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIICy5rQn2U2aFE1E0rPvZIr+OShE1L3oPByzkVAaFJJH) 2026-03-28 02:05:43.111510 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNHsg+ddVy4e38ZdO6QeHrt7SVmW17j7xLH+W8asoWE/4/0vEYh9KIYV35L1j2afLtnZ2BDwizxfOEQPl/21YJ8=) 2026-03-28 02:05:43.111541 | orchestrator | 2026-03-28 02:05:43.111564 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:43.111585 | orchestrator | Saturday 28 March 2026 02:05:31 +0000 (0:00:01.109) 0:00:13.499 ******** 2026-03-28 02:05:43.111605 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoOGuJuaUY+lBlv+wSvC9F/i0xbvdsTC7uWtYyb8LkLZBewrVZv5eekTkXbm3rdW3sdTqo9WHZZ1f2vvT3NiLU=) 2026-03-28 02:05:43.111629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRDXDqxH3YCoVSL6AYzdvxW3YvDDJc1BpXz8PpaPCcrTiIEHITj4dzhRMnyW717ht8JaR1nCcxXoHtM3miQZ4JVas4E0ofbyDi2Dg9bFZJNUmZ3HHdJLX5kvLBwM7zQPA1xGC4ILcVr7PTQ2CqgMRvKHk1kC5rL+mX/sgYLP/1tAL+P6kg3XtzksE+KkPV1+rdJy8Yg7byiqQDwkiBXn1ZeFq68mCiCj9x9gpUPmQPfW9M3n99W+2EyhI5rUTOR7KFO73qzr/jDq1XTg5oM/kOCOrnlr37lRu7QbA1qhdROF2lUYGYeCuV2CzVsKl/8rRexSkoUZIoL0gMkudyknoFJTFizhMdwSRThRQvh5iZRAwlJcDYMOem86+DrWtSRQhMoE3/HvAtdb6TFkcr1AVKYq5IShahIN12IWE8VHFPe5jKHczpdz1qF8VfawNlwrlFw7zBnGT052MUuh6m27KgHdfB5CfBAXWqEvucBBYfB7NONbFjCTVAV1As9FKspRE=) 2026-03-28 02:05:43.111677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhS5+cqnv1aIa1lcFSbduq3wGGXW0sSI+va22zIjiAI) 2026-03-28 02:05:43.111698 | orchestrator | 2026-03-28 02:05:43.111719 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-28 02:05:43.111738 | orchestrator | Saturday 28 March 2026 02:05:32 +0000 (0:00:01.099) 
0:00:14.598 ******** 2026-03-28 02:05:43.111758 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 02:05:43.111778 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 02:05:43.111799 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 02:05:43.111821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 02:05:43.111842 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 02:05:43.111866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 02:05:43.111888 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 02:05:43.111909 | orchestrator | 2026-03-28 02:05:43.111933 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-28 02:05:43.111956 | orchestrator | Saturday 28 March 2026 02:05:38 +0000 (0:00:05.445) 0:00:20.044 ******** 2026-03-28 02:05:43.111979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 02:05:43.112002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 02:05:43.112024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 02:05:43.112046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 02:05:43.112066 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 02:05:43.112085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 02:05:43.112098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 02:05:43.112110 | orchestrator | 2026-03-28 02:05:43.112123 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:43.112136 | orchestrator | Saturday 28 March 2026 02:05:38 +0000 (0:00:00.198) 0:00:20.242 ******** 2026-03-28 02:05:43.112149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6XaoC421xBwbbhqi57bmIRo2isJT+6xB3LcUML9DK/) 2026-03-28 02:05:43.112190 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzNYOUd+ZEqTzqjOff6PStEVNhuuS3Lot/LQRr6W4Hk0rYJq3Q+47EeKguKVqgnsmO2+46qKP9GLOsZMSEjyJPEci0fhUPgI1vis9+OCzyGULAjhd1zBWYUXw4Z6ePnHuod4KhWuV/YFMgWRj+xAeV5ApYG4uYI1EFt0j1dj52Fnr5BwxVx1Pwj7C6O4j/b0FdcdrCVLuqzJjDpxohVwWL1SJMEquvkEwxw6QYeDJhDgvyFyyEzclM1kUsLxfsye0LJ9x0qJcOz7zLIu59ugIjl3FxzoFZsXQAuCT7UAGkVdgRbFRwobzn+cmtOyb0lvg10Wy/H2/nbmMRBCgbe81ZnBaVMjAjN1BNRwu1NC7S7VmhZXUWAE/gyvsDXx/L9ngCT8/wvDtW3PjYy0ZaoCobLGD/xHmAkQHJwBvAhxEoeq8pF4SVA6uEskGKDF5W6gAsz1YyXy91coW7y3W4m50CtAAy1MYKZe5c9PYLkggKss19TXX81DMwGacaxUwqeOk=) 2026-03-28 02:05:43.112212 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCpep9qtPpIfTIYB0P/VLSWQE6OjY3jF2bogmE/+7dvyw3Q4zTvTNgmFxbR72eqbFB5c9RVz2WWZsqkqlOUhdEE=) 2026-03-28 
02:05:43.112235 | orchestrator | 2026-03-28 02:05:43.112247 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:43.112258 | orchestrator | Saturday 28 March 2026 02:05:39 +0000 (0:00:01.172) 0:00:21.414 ******** 2026-03-28 02:05:43.112269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPzxAnH6liIN3pa3CYB+WYOCvnWTMBE/UM3GDgsLCz34) 2026-03-28 02:05:43.112281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DuFyOn/L9cj67t0WjjP5xSQM/7WbeUSoIKBawMiKVFFIB0FEo4DfutL4r2aC+XrGVsGQWVmGWhltThNzKmXYj38bNS0mddUk5/XMKowOOTKQUaTDfCB2eHaWm0VljAjYNY0Z0RSdy23THYX0tX/l14S2Xtqd7Us0zVTo4807rQQ3Q9I8skkgnFfUc5YBZuJ3xqQ3WXuZkY8yDK9ay0vxRlfjEqn2Sj/OA/8NcqQmwQckWCmhNFdoUM1uOBnTcRDkCw1i+2aK3FKJrQK3joJ/H5R22fHEmF02MY7ZdFTGkg6/2XAehvL+Dq5zvXKK4qT78JSvCpy+8vB44Ktnfi2f/0H6Wk9bkwAmLSu602gGkbI2eaJ4Xisqu88wVyLMmKd74w6M+r0Ewbgx9RPcN2W30Il4JazboMeyKG7t5Hw4OuXjhQL7/x/2sooUQSIvGNMYpM2QJGfloylCj/x+EAOB2UiaMI1FtozAPJrj0Kd86HlIBdJtly6Gikm9/xYC0lU=) 2026-03-28 02:05:43.112293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCLyEPFOn5Nd4GcoLE5EfI40kVxueR6UG+3EcReSMRJA/f3Qj144TVSI6SDxl4ivOeXKrA5NaJqSGsKuUkAMOXA=) 2026-03-28 02:05:43.112304 | orchestrator | 2026-03-28 02:05:43.112315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:43.112326 | orchestrator | Saturday 28 March 2026 02:05:40 +0000 (0:00:01.179) 0:00:22.593 ******** 2026-03-28 02:05:43.112337 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAYy6V2C97AEEEU1BCeqMgxUTMDul4rjy/lth0l/iEUr3PovxZMmo8UPaFFXdURbsL9MFi+abF8XU/I4wylaN+M=) 2026-03-28 02:05:43.112348 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnVlZjPfhq7b1HIh1Vmyz2o8U4/xlXwvZ2FZeeNDjS+1FIU47v2l/gHC6cP3kxXH7efYnYDVXk3N7VBGKDjXEvCTcvFHxbnyNj0+2/QymKlep4EY2Hw2RyKuePECJEsi/q90HAgSjolvpj+1bjryndkkErLC+ETlzHZLHb2SSGVP97LeBI0CIhQM+aMf2WMKZ4076FVvO4dBlxNE5KX2fwjWsSUfTqneuCqzLpK90IE1gZG81n95SPTOuxCgqBscUoYM/pDtHb0/lQwG9wgirNtprOHKY1jtMip6j1XWP1lTy9b8Xfe6yEJyvcFljCb3P5oXNgx1Y5jorjR9EM7Hsvu6Lfvc9601g4VPuVOBr1X0JlYtIupIR6Fu1Yt2COzVke5p2buT2ljhwDUyZAAvYEdzhhe3iFT1ZamvdjbQ/qJsbgQyswO/KmaY499l0PRMOeQKw1NeyyzrJa2QXbyQx/xNQ+QjXYAsXTv1N3guW/nELq5FDAQ4PDrEKqZQbdzKc=) 2026-03-28 02:05:43.112360 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICcU1sXhDckvL9BjHrzy359MIvlozlhQJ+jBvaMhFnO+) 2026-03-28 02:05:43.112370 | orchestrator | 2026-03-28 02:05:43.112414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:43.112468 | orchestrator | Saturday 28 March 2026 02:05:41 +0000 (0:00:01.134) 0:00:23.728 ******** 2026-03-28 02:05:43.112484 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC58PWpO8CM9OL21wfCigdPkU7QLDJGp+5IW2CxvlNgi3QT99d6OmNc9P1Dh475O1x4FfCuvyRtn7Tw1vf67p4A=) 2026-03-28 02:05:43.112496 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhtx24hZNjY7wjm0AZIubswQjkWvuLHSTszxLrfkrMQuy9BuWVUq/bKrKqvgvk2kKn8Deat9RflnVskyPY4kor9xinB1sd6aL7ubhoDyok3TDaqo0l8YzPXfbP0XEzZ7gxoXvZ8Y/FwbFE7o8FnRJgjTmANId8HGukFLszGsLorUrBjQn3WPk3335WcQ+uReVn6ZsbHqNxcGG6uUxFqaxjYnmS+2JeTd/5r2fafb5PhlM9PCk7jFDqcXSTE4GdQ5L/SqSfKpNwWnvAHHLJyec3rJN1nHrxrc5xZPa4uc4CaR1I4zDPu+rJzwNHy4IgL6h9+xKU5Wrx+jfAMDRyIPVvNodiUdk2UFobEHRvagBttu5mPeiVLIWwZM4UDIlNlNgDfci6zMV7nxzy4y2gBxvbRegNyzhdYAJQSH/VGecmjKjdutTqEOvziOuvz8cUBViQxSoWt4PWnfnMsk7/Ejy0BcNdh91+e3vmgj4i0eAnJXe+zPpl+T4kSXfZf95bfAE=) 2026-03-28 02:05:43.112523 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN6xXnUzLnmm1YxaOtqzR8bkvM4RpLV0zHORsQa5UVOU) 2026-03-28 02:05:47.709896 | orchestrator | 2026-03-28 02:05:47.710142 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:47.710180 | orchestrator | Saturday 28 March 2026 02:05:43 +0000 (0:00:01.165) 0:00:24.894 ******** 2026-03-28 02:05:47.710203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXL/IWwULTd6Vri2H0/CXmBDIMTDNAcEyWt50CySqiCaN0yw50zmx7TdOvfz3HSJGbDKlFaAjq2gAWrST3CTBAOkfM4bao+nBsL21X85WV5d1PgD2JY2D/kTYJCA8HK0o1ImEQPIXxT84EaU0ESVZd5w79ZPj21vZYspcaUuqUnvRvPQ6wfa4ROAyZzVm5riqM3ypPnxZf8GFE0LzsCi7lzqYA9bhrb2z5MimtGq/xeDQPlipM6Ovof567SSX6OhQHR6C32znhiTiwiAR1CWyT309okkgVyFmBl8rGHSV1CerPM8OMR46sFRqzIQr4DVUgY5VeKAYBaQAGvnmaAnsI9R7sYlMvpAywVmPmTggSKqZKL6e0WNPULDkv6c8jRKNLYCXefoD7BAcAeVmJVg2yIiwcQYh8HdlbClPhc9nKftZIXYNm4inxwGZ8LxPqx6xAkVJEkquQf1eK9OZv0huM6RDAp3nxeGQ9F+FD/oplKEbwjCLPz8etmPOH4PBnzKE=) 2026-03-28 02:05:47.710225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEr+nDUHce6YCsrKzlM9dZVJ0P46AAnLWiFMKOzMMhSx) 2026-03-28 02:05:47.710239 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFr/r7YuaZlUfxQKVWFrEMMMDMStE7t2223ZoJ3YbdykRDEO7Yhwvov2NTPY6jphFUvx8w+lW2+vhDQhmmJ2Uq4=) 2026-03-28 02:05:47.710252 | orchestrator | 2026-03-28 02:05:47.710264 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:47.710275 | orchestrator | Saturday 28 March 2026 02:05:44 +0000 (0:00:01.140) 0:00:26.034 ******** 2026-03-28 02:05:47.710286 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCH+qWWMxbFC6Ku3AzG7ZkpfD4hkJc7aHVamhsuLAc70Q+fnVzsIX8wycKx0zJrFWR8s3Xudn96TqiZdhY5Vs1YqxTvpKgtpblmeRq933ulvvxSBYi5iKOvQ67V2KvcBa4Oj634V0ngo+5UNUAzhMGJLNtvcxV9qZh4RSeY+DR+XvwplymOARnZL3pyL4yVKZsJY73wqxsb4VoIq3cLP/mCmAB093WoNFZ1mPQLemeOUsaZlGh1L48mioWuSXCxmN+xUAq2ru5x4n/KK7hl0CM6j/vOzAqD2/eZJPULNQ4+j20eSEYdmV+EewdVrsqab6MadUOVjeIKkOqbyYeMVi2bBjuNXGw/JMiCiKn1FTIVmyOpNkMY/kj3cm1s26fE6AQJvxSn3Sip0gYZq6dMPbex4YyNEwx+YYHCxDblOZYD2/VwepvYZL7V+RdNJnRsScd7NuO/y59DoVpMyNTT0JkPkfhNZJGAO/2AfFLlWHy0lLo1sgXA8GJ0nV2eWpLXVQc=) 2026-03-28 02:05:47.710298 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIICy5rQn2U2aFE1E0rPvZIr+OShE1L3oPByzkVAaFJJH) 2026-03-28 02:05:47.710309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNHsg+ddVy4e38ZdO6QeHrt7SVmW17j7xLH+W8asoWE/4/0vEYh9KIYV35L1j2afLtnZ2BDwizxfOEQPl/21YJ8=) 2026-03-28 02:05:47.710320 | orchestrator | 2026-03-28 02:05:47.710331 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 02:05:47.710342 | orchestrator | Saturday 28 March 2026 02:05:45 +0000 (0:00:01.130) 0:00:27.165 ******** 2026-03-28 02:05:47.710353 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhS5+cqnv1aIa1lcFSbduq3wGGXW0sSI+va22zIjiAI) 2026-03-28 02:05:47.710385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCRDXDqxH3YCoVSL6AYzdvxW3YvDDJc1BpXz8PpaPCcrTiIEHITj4dzhRMnyW717ht8JaR1nCcxXoHtM3miQZ4JVas4E0ofbyDi2Dg9bFZJNUmZ3HHdJLX5kvLBwM7zQPA1xGC4ILcVr7PTQ2CqgMRvKHk1kC5rL+mX/sgYLP/1tAL+P6kg3XtzksE+KkPV1+rdJy8Yg7byiqQDwkiBXn1ZeFq68mCiCj9x9gpUPmQPfW9M3n99W+2EyhI5rUTOR7KFO73qzr/jDq1XTg5oM/kOCOrnlr37lRu7QbA1qhdROF2lUYGYeCuV2CzVsKl/8rRexSkoUZIoL0gMkudyknoFJTFizhMdwSRThRQvh5iZRAwlJcDYMOem86+DrWtSRQhMoE3/HvAtdb6TFkcr1AVKYq5IShahIN12IWE8VHFPe5jKHczpdz1qF8VfawNlwrlFw7zBnGT052MUuh6m27KgHdfB5CfBAXWqEvucBBYfB7NONbFjCTVAV1As9FKspRE=) 2026-03-28 02:05:47.710430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoOGuJuaUY+lBlv+wSvC9F/i0xbvdsTC7uWtYyb8LkLZBewrVZv5eekTkXbm3rdW3sdTqo9WHZZ1f2vvT3NiLU=) 2026-03-28 02:05:47.710445 | orchestrator | 2026-03-28 02:05:47.710458 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-28 02:05:47.710495 | orchestrator | Saturday 28 March 2026 02:05:46 +0000 (0:00:01.063) 0:00:28.228 ******** 2026-03-28 02:05:47.710510 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-28 02:05:47.710523 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-28 02:05:47.710534 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-28 02:05:47.710547 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-28 02:05:47.710559 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 02:05:47.710572 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-28 02:05:47.710584 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-28 02:05:47.710596 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:05:47.710609 | orchestrator | 2026-03-28 02:05:47.710640 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-03-28 02:05:47.710652 | orchestrator | Saturday 28 March 2026 02:05:46 +0000 (0:00:00.178) 0:00:28.406 ******** 2026-03-28 02:05:47.710665 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:05:47.710677 | orchestrator | 2026-03-28 02:05:47.710690 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-28 02:05:47.710702 | orchestrator | Saturday 28 March 2026 02:05:46 +0000 (0:00:00.047) 0:00:28.453 ******** 2026-03-28 02:05:47.710720 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:05:47.710733 | orchestrator | 2026-03-28 02:05:47.710745 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-28 02:05:47.710757 | orchestrator | Saturday 28 March 2026 02:05:46 +0000 (0:00:00.057) 0:00:28.511 ******** 2026-03-28 02:05:47.710769 | orchestrator | changed: [testbed-manager] 2026-03-28 02:05:47.710782 | orchestrator | 2026-03-28 02:05:47.710794 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:05:47.710805 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:05:47.710817 | orchestrator | 2026-03-28 02:05:47.710828 | orchestrator | 2026-03-28 02:05:47.710838 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:05:47.710849 | orchestrator | Saturday 28 March 2026 02:05:47 +0000 (0:00:00.765) 0:00:29.277 ******** 2026-03-28 02:05:47.710859 | orchestrator | =============================================================================== 2026-03-28 02:05:47.710870 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.26s 2026-03-28 02:05:47.710881 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2026-03-28 02:05:47.710892 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2026-03-28 02:05:47.710903 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-28 02:05:47.710913 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-28 02:05:47.710924 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-28 02:05:47.710935 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-28 02:05:47.710945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-28 02:05:47.710956 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 02:05:47.710966 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 02:05:47.710977 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 02:05:47.710989 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 02:05:47.711008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-28 02:05:47.711026 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-28 02:05:47.711054 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 02:05:47.711074 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-28 02:05:47.711092 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.77s 2026-03-28 02:05:47.711110 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-03-28 02:05:47.711123 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-28 02:05:47.711134 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-03-28 02:05:48.029444 | orchestrator | + osism apply squid 2026-03-28 02:06:00.239063 | orchestrator | 2026-03-28 02:06:00 | INFO  | Task 361d261c-7094-4483-9aea-7436fcbae524 (squid) was prepared for execution. 2026-03-28 02:06:00.239170 | orchestrator | 2026-03-28 02:06:00 | INFO  | It takes a moment until task 361d261c-7094-4483-9aea-7436fcbae524 (squid) has been started and output is visible here. 2026-03-28 02:07:54.873567 | orchestrator | 2026-03-28 02:07:54.873667 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-28 02:07:54.873681 | orchestrator | 2026-03-28 02:07:54.873738 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-28 02:07:54.873756 | orchestrator | Saturday 28 March 2026 02:06:04 +0000 (0:00:00.169) 0:00:00.169 ******** 2026-03-28 02:07:54.873779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 02:07:54.873798 | orchestrator | 2026-03-28 02:07:54.873814 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-28 02:07:54.873830 | orchestrator | Saturday 28 March 2026 02:06:04 +0000 (0:00:00.118) 0:00:00.287 ******** 2026-03-28 02:07:54.873845 | orchestrator | ok: [testbed-manager] 2026-03-28 02:07:54.873862 | orchestrator | 2026-03-28 02:07:54.873879 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-28 02:07:54.873894 | orchestrator | Saturday 28 March 2026 02:06:06 +0000 (0:00:01.634) 0:00:01.922 ******** 2026-03-28 02:07:54.873911 | orchestrator | changed: [testbed-manager] => 
(item=/opt/squid/configuration) 2026-03-28 02:07:54.873929 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-28 02:07:54.873945 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-28 02:07:54.873956 | orchestrator | 2026-03-28 02:07:54.873965 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-28 02:07:54.873974 | orchestrator | Saturday 28 March 2026 02:06:07 +0000 (0:00:01.181) 0:00:03.103 ******** 2026-03-28 02:07:54.873983 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-28 02:07:54.873992 | orchestrator | 2026-03-28 02:07:54.874000 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-28 02:07:54.874010 | orchestrator | Saturday 28 March 2026 02:06:08 +0000 (0:00:01.119) 0:00:04.222 ******** 2026-03-28 02:07:54.874067 | orchestrator | ok: [testbed-manager] 2026-03-28 02:07:54.874077 | orchestrator | 2026-03-28 02:07:54.874086 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-28 02:07:54.874096 | orchestrator | Saturday 28 March 2026 02:06:09 +0000 (0:00:00.368) 0:00:04.591 ******** 2026-03-28 02:07:54.874105 | orchestrator | changed: [testbed-manager] 2026-03-28 02:07:54.874115 | orchestrator | 2026-03-28 02:07:54.874125 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-28 02:07:54.874136 | orchestrator | Saturday 28 March 2026 02:06:10 +0000 (0:00:00.949) 0:00:05.540 ******** 2026-03-28 02:07:54.874147 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-28 02:07:54.874162 | orchestrator | ok: [testbed-manager] 2026-03-28 02:07:54.874172 | orchestrator | 2026-03-28 02:07:54.874182 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-28 02:07:54.874216 | orchestrator | Saturday 28 March 2026 02:06:41 +0000 (0:00:31.720) 0:00:37.261 ******** 2026-03-28 02:07:54.874226 | orchestrator | changed: [testbed-manager] 2026-03-28 02:07:54.874235 | orchestrator | 2026-03-28 02:07:54.874243 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-28 02:07:54.874252 | orchestrator | Saturday 28 March 2026 02:06:53 +0000 (0:00:12.021) 0:00:49.283 ******** 2026-03-28 02:07:54.874261 | orchestrator | Pausing for 60 seconds 2026-03-28 02:07:54.874270 | orchestrator | changed: [testbed-manager] 2026-03-28 02:07:54.874279 | orchestrator | 2026-03-28 02:07:54.874287 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-28 02:07:54.874296 | orchestrator | Saturday 28 March 2026 02:07:53 +0000 (0:01:00.097) 0:01:49.381 ******** 2026-03-28 02:07:54.874304 | orchestrator | ok: [testbed-manager] 2026-03-28 02:07:54.874313 | orchestrator | 2026-03-28 02:07:54.874321 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-28 02:07:54.874330 | orchestrator | Saturday 28 March 2026 02:07:53 +0000 (0:00:00.073) 0:01:49.454 ******** 2026-03-28 02:07:54.874339 | orchestrator | changed: [testbed-manager] 2026-03-28 02:07:54.874347 | orchestrator | 2026-03-28 02:07:54.874356 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:07:54.874365 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 02:07:54.874373 | orchestrator | 2026-03-28 02:07:54.874382 | orchestrator | 2026-03-28 02:07:54.874390 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-28 02:07:54.874399 | orchestrator | Saturday 28 March 2026 02:07:54 +0000 (0:00:00.663) 0:01:50.117 ******** 2026-03-28 02:07:54.874408 | orchestrator | =============================================================================== 2026-03-28 02:07:54.874416 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-28 02:07:54.874425 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.72s 2026-03-28 02:07:54.874449 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.02s 2026-03-28 02:07:54.874458 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s 2026-03-28 02:07:54.874467 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-03-28 02:07:54.874475 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-03-28 02:07:54.874484 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-03-28 02:07:54.874492 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-03-28 02:07:54.874501 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-03-28 02:07:54.874509 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s 2026-03-28 02:07:54.874518 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-28 02:07:55.163684 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-28 02:07:55.163826 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 02:07:55.223450 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 02:07:55.223674 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-28 02:07:55.231618 | orchestrator | + set -e 2026-03-28 02:07:55.232022 | orchestrator | + NAMESPACE=kolla/release 2026-03-28 02:07:55.232054 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-28 02:07:55.238680 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-28 02:07:55.308319 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-28 02:07:55.308882 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-28 02:08:07.363878 | orchestrator | 2026-03-28 02:08:07 | INFO  | Task 8f218601-a858-41a4-a955-b6fe705e46cf (operator) was prepared for execution. 2026-03-28 02:08:07.363998 | orchestrator | 2026-03-28 02:08:07 | INFO  | It takes a moment until task 8f218601-a858-41a4-a955-b6fe705e46cf (operator) has been started and output is visible here. 2026-03-28 02:08:22.977046 | orchestrator | 2026-03-28 02:08:22.977167 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-28 02:08:22.977183 | orchestrator | 2026-03-28 02:08:22.977195 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 02:08:22.977207 | orchestrator | Saturday 28 March 2026 02:08:11 +0000 (0:00:00.143) 0:00:00.143 ******** 2026-03-28 02:08:22.977218 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:08:22.977230 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:08:22.977241 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:08:22.977251 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:08:22.977262 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:08:22.977272 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:08:22.977283 | orchestrator | 2026-03-28 02:08:22.977302 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-28 02:08:22.977320 | orchestrator | Saturday 28 March 2026 02:08:14 +0000 (0:00:03.214) 0:00:03.358 
******** 2026-03-28 02:08:22.977338 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:08:22.977355 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:08:22.977372 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:08:22.977409 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:08:22.977431 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:08:22.977448 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:08:22.977467 | orchestrator | 2026-03-28 02:08:22.977479 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-28 02:08:22.977490 | orchestrator | 2026-03-28 02:08:22.977501 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 02:08:22.977512 | orchestrator | Saturday 28 March 2026 02:08:15 +0000 (0:00:00.761) 0:00:04.119 ******** 2026-03-28 02:08:22.977523 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:08:22.977534 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:08:22.977545 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:08:22.977555 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:08:22.977566 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:08:22.977578 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:08:22.977589 | orchestrator | 2026-03-28 02:08:22.977599 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 02:08:22.977610 | orchestrator | Saturday 28 March 2026 02:08:15 +0000 (0:00:00.171) 0:00:04.291 ******** 2026-03-28 02:08:22.977621 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:08:22.977632 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:08:22.977642 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:08:22.977653 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:08:22.977663 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:08:22.977674 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:08:22.977685 | orchestrator | 2026-03-28 02:08:22.977696 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 02:08:22.977707 | orchestrator | Saturday 28 March 2026 02:08:15 +0000 (0:00:00.172) 0:00:04.464 ******** 2026-03-28 02:08:22.977717 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:22.977729 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:22.977740 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:22.977780 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:22.977795 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:08:22.977813 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:22.977830 | orchestrator | 2026-03-28 02:08:22.977848 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 02:08:22.977867 | orchestrator | Saturday 28 March 2026 02:08:16 +0000 (0:00:00.589) 0:00:05.053 ******** 2026-03-28 02:08:22.977880 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:22.977891 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:08:22.977902 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:22.977912 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:22.977923 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:22.977934 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:22.977983 | orchestrator | 2026-03-28 02:08:22.978004 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 02:08:22.978094 | orchestrator | Saturday 28 March 2026 02:08:17 +0000 (0:00:00.878) 0:00:05.932 ******** 2026-03-28 02:08:22.978108 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-28 02:08:22.978119 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-28 02:08:22.978130 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-28 02:08:22.978141 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-28 02:08:22.978151 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-28 02:08:22.978162 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-28 02:08:22.978172 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-28 02:08:22.978183 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-28 02:08:22.978193 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-28 02:08:22.978204 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-28 02:08:22.978214 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-28 02:08:22.978225 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-28 02:08:22.978235 | orchestrator | 2026-03-28 02:08:22.978246 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 02:08:22.978257 | orchestrator | Saturday 28 March 2026 02:08:18 +0000 (0:00:01.157) 0:00:07.089 ******** 2026-03-28 02:08:22.978268 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:22.978279 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:08:22.978289 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:22.978300 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:22.978311 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:22.978321 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:22.978332 | orchestrator | 2026-03-28 02:08:22.978343 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 02:08:22.978355 | orchestrator | Saturday 28 March 2026 02:08:19 +0000 (0:00:01.177) 0:00:08.267 ******** 2026-03-28 02:08:22.978366 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-28 02:08:22.978377 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-28 02:08:22.978387 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-28 02:08:22.978398 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978431 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978443 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978453 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978464 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978475 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 02:08:22.978485 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978496 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978507 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978517 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978528 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978538 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-28 02:08:22.978549 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978560 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978571 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978582 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978592 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978613 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-28 02:08:22.978624 | 
orchestrator | 2026-03-28 02:08:22.978635 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-28 02:08:22.978647 | orchestrator | Saturday 28 March 2026 02:08:20 +0000 (0:00:01.245) 0:00:09.512 ******** 2026-03-28 02:08:22.978657 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:22.978668 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:22.978679 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:22.978689 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:22.978700 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:22.978710 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:22.978721 | orchestrator | 2026-03-28 02:08:22.978732 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-28 02:08:22.978742 | orchestrator | Saturday 28 March 2026 02:08:21 +0000 (0:00:00.178) 0:00:09.691 ******** 2026-03-28 02:08:22.978800 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:22.978819 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:22.978837 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:22.978850 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:22.978860 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:22.978871 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:22.978882 | orchestrator | 2026-03-28 02:08:22.978892 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-28 02:08:22.978903 | orchestrator | Saturday 28 March 2026 02:08:21 +0000 (0:00:00.172) 0:00:09.864 ******** 2026-03-28 02:08:22.978914 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:22.978925 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:22.978935 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:22.978946 | orchestrator | changed: [testbed-node-5] 2026-03-28 
02:08:22.978956 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:22.978966 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:22.978977 | orchestrator | 2026-03-28 02:08:22.978996 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-28 02:08:22.979015 | orchestrator | Saturday 28 March 2026 02:08:21 +0000 (0:00:00.572) 0:00:10.437 ******** 2026-03-28 02:08:22.979034 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:22.979054 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:22.979071 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:22.979087 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:22.979098 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:22.979108 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:22.979118 | orchestrator | 2026-03-28 02:08:22.979129 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-28 02:08:22.979151 | orchestrator | Saturday 28 March 2026 02:08:21 +0000 (0:00:00.172) 0:00:10.610 ******** 2026-03-28 02:08:22.979162 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 02:08:22.979175 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-28 02:08:22.979193 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-28 02:08:22.979206 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 02:08:22.979216 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:22.979227 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:22.979237 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:08:22.979248 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:22.979258 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 02:08:22.979269 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:22.979279 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 
02:08:22.979290 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:22.979300 | orchestrator | 2026-03-28 02:08:22.979311 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-28 02:08:22.979322 | orchestrator | Saturday 28 March 2026 02:08:22 +0000 (0:00:00.714) 0:00:11.324 ******** 2026-03-28 02:08:22.979341 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:22.979352 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:22.979362 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:22.979373 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:22.979383 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:22.979394 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:22.979404 | orchestrator | 2026-03-28 02:08:22.979415 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-28 02:08:22.979426 | orchestrator | Saturday 28 March 2026 02:08:22 +0000 (0:00:00.168) 0:00:11.493 ******** 2026-03-28 02:08:22.979437 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:22.979447 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:22.979458 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:22.979468 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:22.979488 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:24.386104 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:24.386203 | orchestrator | 2026-03-28 02:08:24.386218 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-28 02:08:24.386232 | orchestrator | Saturday 28 March 2026 02:08:22 +0000 (0:00:00.152) 0:00:11.645 ******** 2026-03-28 02:08:24.386243 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:24.386254 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:24.386265 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
02:08:24.386275 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:24.386287 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:24.386298 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:24.386308 | orchestrator | 2026-03-28 02:08:24.386320 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-28 02:08:24.386330 | orchestrator | Saturday 28 March 2026 02:08:23 +0000 (0:00:00.172) 0:00:11.817 ******** 2026-03-28 02:08:24.386341 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:08:24.386352 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:08:24.386381 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:08:24.386392 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:08:24.386403 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:08:24.386414 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:08:24.386424 | orchestrator | 2026-03-28 02:08:24.386435 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-28 02:08:24.386446 | orchestrator | Saturday 28 March 2026 02:08:23 +0000 (0:00:00.673) 0:00:12.491 ******** 2026-03-28 02:08:24.386457 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:08:24.386467 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:08:24.386479 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:08:24.386490 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:08:24.386500 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:08:24.386511 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:08:24.386522 | orchestrator | 2026-03-28 02:08:24.386532 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:08:24.386545 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386557 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386568 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386579 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386592 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386632 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 02:08:24.386645 | orchestrator | 2026-03-28 02:08:24.386659 | orchestrator | 2026-03-28 02:08:24.386671 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:08:24.386684 | orchestrator | Saturday 28 March 2026 02:08:24 +0000 (0:00:00.285) 0:00:12.776 ******** 2026-03-28 02:08:24.386698 | orchestrator | =============================================================================== 2026-03-28 02:08:24.386711 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s 2026-03-28 02:08:24.386724 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s 2026-03-28 02:08:24.386738 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s 2026-03-28 02:08:24.386781 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2026-03-28 02:08:24.386795 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s 2026-03-28 02:08:24.386807 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2026-03-28 02:08:24.386820 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2026-03-28 02:08:24.386833 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.67s 2026-03-28 02:08:24.386845 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2026-03-28 02:08:24.386857 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-03-28 02:08:24.386870 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2026-03-28 02:08:24.386882 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-03-28 02:08:24.386894 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2026-03-28 02:08:24.386906 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s 2026-03-28 02:08:24.386919 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-03-28 02:08:24.386932 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-03-28 02:08:24.386944 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-03-28 02:08:24.386955 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-03-28 02:08:24.386965 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2026-03-28 02:08:24.746115 | orchestrator | + osism apply --environment custom facts 2026-03-28 02:08:26.961134 | orchestrator | 2026-03-28 02:08:26 | INFO  | Trying to run play facts in environment custom 2026-03-28 02:08:37.089304 | orchestrator | 2026-03-28 02:08:37 | INFO  | Task 086ed327-3ef4-4192-8c71-e2fd246a6c71 (facts) was prepared for execution. 2026-03-28 02:08:37.089414 | orchestrator | 2026-03-28 02:08:37 | INFO  | It takes a moment until task 086ed327-3ef4-4192-8c71-e2fd246a6c71 (facts) has been started and output is visible here. 
2026-03-28 02:09:20.670452 | orchestrator | 2026-03-28 02:09:20.670569 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-28 02:09:20.670586 | orchestrator | 2026-03-28 02:09:20.670599 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-28 02:09:20.670610 | orchestrator | Saturday 28 March 2026 02:08:41 +0000 (0:00:00.088) 0:00:00.088 ******** 2026-03-28 02:09:20.670622 | orchestrator | ok: [testbed-manager] 2026-03-28 02:09:20.670634 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:09:20.670645 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:09:20.670656 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:09:20.670667 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.670678 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.670713 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.670726 | orchestrator | 2026-03-28 02:09:20.670737 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-28 02:09:20.670748 | orchestrator | Saturday 28 March 2026 02:08:42 +0000 (0:00:01.423) 0:00:01.512 ******** 2026-03-28 02:09:20.670759 | orchestrator | ok: [testbed-manager] 2026-03-28 02:09:20.670770 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.670781 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:09:20.670791 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:09:20.670802 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.670813 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:09:20.670824 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.670834 | orchestrator | 2026-03-28 02:09:20.670845 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-28 02:09:20.670890 | orchestrator | 2026-03-28 02:09:20.670903 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 02:09:20.670913 | orchestrator | Saturday 28 March 2026 02:08:43 +0000 (0:00:01.192) 0:00:02.705 ******** 2026-03-28 02:09:20.670924 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.670935 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.670959 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.670980 | orchestrator | 2026-03-28 02:09:20.670994 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 02:09:20.671008 | orchestrator | Saturday 28 March 2026 02:08:43 +0000 (0:00:00.092) 0:00:02.797 ******** 2026-03-28 02:09:20.671021 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.671032 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.671045 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.671057 | orchestrator | 2026-03-28 02:09:20.671069 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 02:09:20.671083 | orchestrator | Saturday 28 March 2026 02:08:44 +0000 (0:00:00.212) 0:00:03.009 ******** 2026-03-28 02:09:20.671095 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.671107 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.671119 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.671132 | orchestrator | 2026-03-28 02:09:20.671145 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 02:09:20.671159 | orchestrator | Saturday 28 March 2026 02:08:44 +0000 (0:00:00.221) 0:00:03.230 ******** 2026-03-28 02:09:20.671173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:09:20.671187 | orchestrator | 2026-03-28 02:09:20.671199 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-28 02:09:20.671212 | orchestrator | Saturday 28 March 2026 02:08:44 +0000 (0:00:00.160) 0:00:03.391 ******** 2026-03-28 02:09:20.671224 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.671237 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.671249 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.671262 | orchestrator | 2026-03-28 02:09:20.671275 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 02:09:20.671288 | orchestrator | Saturday 28 March 2026 02:08:44 +0000 (0:00:00.445) 0:00:03.837 ******** 2026-03-28 02:09:20.671301 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:09:20.671313 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:09:20.671326 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:09:20.671338 | orchestrator | 2026-03-28 02:09:20.671349 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 02:09:20.671360 | orchestrator | Saturday 28 March 2026 02:08:45 +0000 (0:00:00.131) 0:00:03.969 ******** 2026-03-28 02:09:20.671371 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.671382 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.671393 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.671403 | orchestrator | 2026-03-28 02:09:20.671414 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 02:09:20.671456 | orchestrator | Saturday 28 March 2026 02:08:46 +0000 (0:00:01.020) 0:00:04.989 ******** 2026-03-28 02:09:20.671467 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.671478 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.671489 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.671500 | orchestrator | 2026-03-28 02:09:20.671511 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 
02:09:20.671521 | orchestrator | Saturday 28 March 2026 02:08:46 +0000 (0:00:00.458) 0:00:05.448 ******** 2026-03-28 02:09:20.671533 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.671544 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.671554 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.671565 | orchestrator | 2026-03-28 02:09:20.671623 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 02:09:20.671636 | orchestrator | Saturday 28 March 2026 02:08:47 +0000 (0:00:01.024) 0:00:06.472 ******** 2026-03-28 02:09:20.671647 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.671704 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.671718 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.671729 | orchestrator | 2026-03-28 02:09:20.671740 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-28 02:09:20.671751 | orchestrator | Saturday 28 March 2026 02:09:03 +0000 (0:00:16.289) 0:00:22.762 ******** 2026-03-28 02:09:20.671762 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:09:20.671772 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:09:20.671783 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:09:20.671794 | orchestrator | 2026-03-28 02:09:20.671805 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-28 02:09:20.671840 | orchestrator | Saturday 28 March 2026 02:09:03 +0000 (0:00:00.080) 0:00:22.842 ******** 2026-03-28 02:09:20.671883 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:09:20.671901 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:09:20.671919 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:09:20.671936 | orchestrator | 2026-03-28 02:09:20.671953 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-28 
02:09:20.671978 | orchestrator | Saturday 28 March 2026 02:09:11 +0000 (0:00:07.688) 0:00:30.531 ******** 2026-03-28 02:09:20.671996 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:09:20.672015 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:09:20.672033 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:09:20.672052 | orchestrator | 2026-03-28 02:09:20.672071 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-28 02:09:20.672084 | orchestrator | Saturday 28 March 2026 02:09:12 +0000 (0:00:00.475) 0:00:31.007 ******** 2026-03-28 02:09:20.672095 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-28 02:09:20.672106 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-28 02:09:20.672117 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-28 02:09:20.672127 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-28 02:09:20.672138 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-28 02:09:20.672149 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-28 02:09:20.672159 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-28 02:09:20.672170 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-28 02:09:20.672180 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-28 02:09:20.672191 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-28 02:09:20.672201 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-28 02:09:20.672212 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-28 02:09:20.672223 | orchestrator | 2026-03-28 02:09:20.672233 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] *****
2026-03-28 02:09:20.672254 | orchestrator | Saturday 28 March 2026  02:09:15 +0000 (0:00:03.567)       0:00:34.574 ********
2026-03-28 02:09:20.672265 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:20.672275 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:20.672286 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:20.672297 | orchestrator |
2026-03-28 02:09:20.672307 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 02:09:20.672318 | orchestrator |
2026-03-28 02:09:20.672329 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 02:09:20.672340 | orchestrator | Saturday 28 March 2026  02:09:17 +0000 (0:00:01.320)       0:00:35.895 ********
2026-03-28 02:09:20.672351 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:09:20.672361 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:09:20.672372 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:09:20.672383 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:20.672394 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:20.672404 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:20.672415 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:20.672425 | orchestrator |
2026-03-28 02:09:20.672436 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:09:20.672448 | orchestrator | testbed-manager : ok=3   changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-03-28 02:09:20.672460 | orchestrator | testbed-node-0  : ok=3   changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-03-28 02:09:20.672472 | orchestrator | testbed-node-1  : ok=3   changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-03-28 02:09:20.672483 | orchestrator | testbed-node-2  : ok=3   changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-03-28 02:09:20.672494 | orchestrator | testbed-node-3  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-03-28 02:09:20.672505 | orchestrator | testbed-node-4  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-03-28 02:09:20.672516 | orchestrator | testbed-node-5  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-03-28 02:09:20.672527 | orchestrator |
2026-03-28 02:09:20.672538 | orchestrator |
2026-03-28 02:09:20.672548 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:09:20.672559 | orchestrator | Saturday 28 March 2026  02:09:20 +0000 (0:00:03.604)       0:00:39.499 ********
2026-03-28 02:09:20.672570 | orchestrator | ===============================================================================
2026-03-28 02:09:20.672581 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.29s
2026-03-28 02:09:20.672592 | orchestrator | Install required packages (Debian) -------------------------------------- 7.69s
2026-03-28 02:09:20.672602 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.60s
2026-03-28 02:09:20.672613 | orchestrator | Copy fact files --------------------------------------------------------- 3.57s
2026-03-28 02:09:20.672624 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-03-28 02:09:20.672634 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-03-28 02:09:20.672655 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2026-03-28 02:09:20.926922 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2026-03-28 02:09:20.926998 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2026-03-28 02:09:20.927019 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-28 02:09:20.927026 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-28 02:09:20.927046 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-03-28 02:09:20.927052 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-03-28 02:09:20.927057 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-03-28 02:09:20.927063 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-03-28 02:09:20.927069 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-28 02:09:20.927074 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-28 02:09:20.927080 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-03-28 02:09:21.289918 | orchestrator | + osism apply bootstrap
2026-03-28 02:09:33.332475 | orchestrator | 2026-03-28 02:09:33 | INFO  | Task 041451f7-51e7-4656-940e-df664ffc2f3c (bootstrap) was prepared for execution.
2026-03-28 02:09:33.332585 | orchestrator | 2026-03-28 02:09:33 | INFO  | It takes a moment until task 041451f7-51e7-4656-940e-df664ffc2f3c (bootstrap) has been started and output is visible here.
2026-03-28 02:09:49.753267 | orchestrator |
2026-03-28 02:09:49.753418 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-28 02:09:49.753447 | orchestrator |
2026-03-28 02:09:49.753469 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-28 02:09:49.753488 | orchestrator | Saturday 28 March 2026  02:09:37 +0000 (0:00:00.182)       0:00:00.182 ********
2026-03-28 02:09:49.753506 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:49.753518 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:49.753530 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:49.753541 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:49.753552 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:09:49.753563 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:09:49.753574 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:09:49.753585 | orchestrator |
2026-03-28 02:09:49.753597 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 02:09:49.753608 | orchestrator |
2026-03-28 02:09:49.753619 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 02:09:49.753630 | orchestrator | Saturday 28 March 2026  02:09:37 +0000 (0:00:00.279)       0:00:00.461 ********
2026-03-28 02:09:49.753642 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:09:49.753653 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:09:49.753663 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:09:49.753675 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:49.753685 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:49.753696 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:49.753707 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:49.753744 | orchestrator |
2026-03-28 02:09:49.753757 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-28 02:09:49.753777 | orchestrator |
2026-03-28 02:09:49.753797 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 02:09:49.753816 | orchestrator | Saturday 28 March 2026  02:09:41 +0000 (0:00:03.840)       0:00:04.301 ********
2026-03-28 02:09:49.753836 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-28 02:09:49.753855 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-28 02:09:49.753875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-28 02:09:49.753894 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-28 02:09:49.753943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 02:09:49.753962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-28 02:09:49.753981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 02:09:49.754000 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-28 02:09:49.754077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 02:09:49.754120 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-28 02:09:49.754131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 02:09:49.754142 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-28 02:09:49.754153 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-28 02:09:49.754163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-28 02:09:49.754174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-28 02:09:49.754185 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:09:49.754196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 02:09:49.754207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-28 02:09:49.754218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-28 02:09:49.754229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 02:09:49.754239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 02:09:49.754249 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-28 02:09:49.754260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 02:09:49.754270 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:09:49.754281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 02:09:49.754291 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-28 02:09:49.754302 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 02:09:49.754313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 02:09:49.754323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 02:09:49.754334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 02:09:49.754345 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-28 02:09:49.754355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 02:09:49.754366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 02:09:49.754376 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 02:09:49.754387 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:09:49.754397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 02:09:49.754408 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 02:09:49.754419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-28 02:09:49.754429 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-28 02:09:49.754440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 02:09:49.754450 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:09:49.754461 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 02:09:49.754471 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-28 02:09:49.754482 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 02:09:49.754493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 02:09:49.754504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 02:09:49.754535 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 02:09:49.754547 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:09:49.754558 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 02:09:49.754568 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 02:09:49.754579 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 02:09:49.754589 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:09:49.754600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 02:09:49.754611 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 02:09:49.754647 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 02:09:49.754658 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:09:49.754669 | orchestrator |
2026-03-28 02:09:49.754680 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-28 02:09:49.754690 | orchestrator |
2026-03-28 02:09:49.754701 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-28 02:09:49.754712 | orchestrator | Saturday 28 March 2026  02:09:42 +0000 (0:00:00.468)       0:00:04.770 ********
2026-03-28 02:09:49.754723 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:49.754733 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:49.754744 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:49.754755 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:49.754765 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:09:49.754776 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:09:49.754786 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:09:49.754797 | orchestrator |
2026-03-28 02:09:49.754808 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-28 02:09:49.754819 | orchestrator | Saturday 28 March 2026  02:09:43 +0000 (0:00:01.255)       0:00:06.026 ********
2026-03-28 02:09:49.754830 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:49.754840 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:09:49.754851 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:09:49.754861 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:09:49.754872 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:09:49.754882 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:09:49.754893 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:09:49.754933 | orchestrator |
2026-03-28 02:09:49.754949 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-28 02:09:49.754960 | orchestrator | Saturday 28 March 2026  02:09:44 +0000 (0:00:01.216)       0:00:07.243 ********
2026-03-28 02:09:49.754972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:09:49.754985 | orchestrator |
2026-03-28 02:09:49.754996 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-28 02:09:49.755007 | orchestrator | Saturday 28 March 2026  02:09:45 +0000 (0:00:00.282)       0:00:07.525 ********
2026-03-28 02:09:49.755017 | orchestrator | changed: [testbed-manager]
2026-03-28 02:09:49.755028 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:09:49.755039 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:09:49.755049 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:09:49.755060 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:09:49.755071 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:09:49.755082 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:09:49.755101 | orchestrator |
2026-03-28 02:09:49.755120 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-28 02:09:49.755139 | orchestrator | Saturday 28 March 2026  02:09:47 +0000 (0:00:02.208)       0:00:09.734 ********
2026-03-28 02:09:49.755153 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:09:49.755165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:09:49.755179 | orchestrator |
2026-03-28 02:09:49.755190 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-28 02:09:49.755201 | orchestrator | Saturday 28 March 2026  02:09:47 +0000 (0:00:00.247)       0:00:09.981 ********
2026-03-28 02:09:49.755211 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:09:49.755222 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:09:49.755233 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:09:49.755243 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:09:49.755254 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:09:49.755265 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:09:49.755284 | orchestrator |
2026-03-28 02:09:49.755300 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-28 02:09:49.755311 | orchestrator | Saturday 28 March 2026  02:09:48 +0000 (0:00:01.112)       0:00:11.094 ********
2026-03-28 02:09:49.755322 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:09:49.755333 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:09:49.755343 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:09:49.755354 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:09:49.755365 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:09:49.755375 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:09:49.755386 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:09:49.755396 | orchestrator |
2026-03-28 02:09:49.755407 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-28 02:09:49.755418 | orchestrator | Saturday 28 March 2026  02:09:49 +0000 (0:00:00.596)       0:00:11.690 ********
2026-03-28 02:09:49.755429 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:09:49.755440 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:09:49.755450 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:09:49.755461 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:09:49.755472 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:09:49.755482 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:09:49.755493 | orchestrator | ok: [testbed-manager]
2026-03-28 02:09:49.755504 | orchestrator |
2026-03-28 02:09:49.755515 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-28 02:09:49.755527 | orchestrator | Saturday 28 March 2026  02:09:49 +0000 (0:00:00.425)       0:00:12.116 ********
2026-03-28 02:09:49.755537 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:09:49.755548 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:09:49.755567 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:10:01.609701 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:10:01.609801 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:10:01.609812 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:10:01.609820 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:10:01.609829 | orchestrator |
2026-03-28 02:10:01.609838 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-28 02:10:01.609847 | orchestrator | Saturday 28 March 2026  02:09:49 +0000 (0:00:00.238)       0:00:12.354 ********
2026-03-28 02:10:01.609857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:01.609881 | orchestrator |
2026-03-28 02:10:01.609890 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-28 02:10:01.609899 | orchestrator | Saturday 28 March 2026  02:09:50 +0000 (0:00:00.339)       0:00:12.694 ********
2026-03-28 02:10:01.609907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:01.609915 | orchestrator |
2026-03-28 02:10:01.609985 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-28 02:10:01.609994 | orchestrator | Saturday 28 March 2026  02:09:50 +0000 (0:00:00.289)       0:00:12.984 ********
2026-03-28 02:10:01.610002 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.610011 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610071 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.610079 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.610087 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.610095 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.610103 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.610111 | orchestrator |
2026-03-28 02:10:01.610119 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-28 02:10:01.610127 | orchestrator | Saturday 28 March 2026  02:09:51 +0000 (0:00:01.409)       0:00:14.393 ********
2026-03-28 02:10:01.610156 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:10:01.610164 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:10:01.610172 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:10:01.610180 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:10:01.610187 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:10:01.610195 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:10:01.610202 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:10:01.610210 | orchestrator |
2026-03-28 02:10:01.610218 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-28 02:10:01.610226 | orchestrator | Saturday 28 March 2026  02:09:52 +0000 (0:00:00.230)       0:00:14.623 ********
2026-03-28 02:10:01.610234 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610243 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.610252 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.610261 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.610270 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.610278 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.610287 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.610296 | orchestrator |
2026-03-28 02:10:01.610305 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-28 02:10:01.610314 | orchestrator | Saturday 28 March 2026  02:09:52 +0000 (0:00:00.524)       0:00:15.147 ********
2026-03-28 02:10:01.610322 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:10:01.610331 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:10:01.610340 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:10:01.610349 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:10:01.610358 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:10:01.610366 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:10:01.610376 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:10:01.610385 | orchestrator |
2026-03-28 02:10:01.610394 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-28 02:10:01.610405 | orchestrator | Saturday 28 March 2026  02:09:52 +0000 (0:00:00.337)       0:00:15.485 ********
2026-03-28 02:10:01.610413 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610422 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:10:01.610431 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:10:01.610440 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:10:01.610449 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:01.610457 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:01.610474 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:01.610483 | orchestrator |
2026-03-28 02:10:01.610492 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-28 02:10:01.610501 | orchestrator | Saturday 28 March 2026  02:09:53 +0000 (0:00:00.522)       0:00:16.007 ********
2026-03-28 02:10:01.610510 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610519 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:10:01.610527 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:10:01.610536 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:10:01.610545 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:01.610554 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:01.610563 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:01.610571 | orchestrator |
2026-03-28 02:10:01.610580 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-28 02:10:01.610589 | orchestrator | Saturday 28 March 2026  02:09:54 +0000 (0:00:01.064)       0:00:17.072 ********
2026-03-28 02:10:01.610599 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.610607 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610615 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.610622 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.610630 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.610638 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.610645 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.610653 | orchestrator |
2026-03-28 02:10:01.610661 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-28 02:10:01.610675 | orchestrator | Saturday 28 March 2026  02:09:55 +0000 (0:00:01.048)       0:00:18.120 ********
2026-03-28 02:10:01.610699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:01.610708 | orchestrator |
2026-03-28 02:10:01.610715 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-28 02:10:01.610723 | orchestrator | Saturday 28 March 2026  02:09:55 +0000 (0:00:00.303)       0:00:18.423 ********
2026-03-28 02:10:01.610731 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:10:01.610741 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:10:01.610754 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:01.610768 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:10:01.610781 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:01.610793 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:10:01.610805 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:01.610817 | orchestrator |
2026-03-28 02:10:01.610830 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-28 02:10:01.610843 | orchestrator | Saturday 28 March 2026  02:09:57 +0000 (0:00:01.270)       0:00:19.694 ********
2026-03-28 02:10:01.610856 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610869 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.610882 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.610895 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.610907 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.610915 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.610938 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.610947 | orchestrator |
2026-03-28 02:10:01.610955 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-28 02:10:01.610962 | orchestrator | Saturday 28 March 2026  02:09:57 +0000 (0:00:00.230)       0:00:19.924 ********
2026-03-28 02:10:01.610970 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.610978 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.610985 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.610993 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611001 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.611008 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.611016 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.611024 | orchestrator |
2026-03-28 02:10:01.611031 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-28 02:10:01.611039 | orchestrator | Saturday 28 March 2026  02:09:57 +0000 (0:00:00.217)       0:00:20.171 ********
2026-03-28 02:10:01.611047 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.611055 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.611062 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.611070 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611077 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.611085 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.611092 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.611100 | orchestrator |
2026-03-28 02:10:01.611108 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-28 02:10:01.611116 | orchestrator | Saturday 28 March 2026  02:09:57 +0000 (0:00:00.218)       0:00:20.389 ********
2026-03-28 02:10:01.611124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:01.611134 | orchestrator |
2026-03-28 02:10:01.611142 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-28 02:10:01.611149 | orchestrator | Saturday 28 March 2026  02:09:58 +0000 (0:00:00.299)       0:00:20.688 ********
2026-03-28 02:10:01.611157 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.611165 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.611179 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.611187 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611195 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.611202 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.611210 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.611218 | orchestrator |
2026-03-28 02:10:01.611225 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-28 02:10:01.611233 | orchestrator | Saturday 28 March 2026  02:09:58 +0000 (0:00:00.526)       0:00:21.215 ********
2026-03-28 02:10:01.611241 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:10:01.611249 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:10:01.611257 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:10:01.611264 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:10:01.611272 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:10:01.611280 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:10:01.611287 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:10:01.611295 | orchestrator |
2026-03-28 02:10:01.611303 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-28 02:10:01.611311 | orchestrator | Saturday 28 March 2026  02:09:58 +0000 (0:00:00.215)       0:00:21.430 ********
2026-03-28 02:10:01.611318 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.611326 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.611334 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611342 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.611349 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:01.611357 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:01.611365 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:01.611373 | orchestrator |
2026-03-28 02:10:01.611380 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-28 02:10:01.611388 | orchestrator | Saturday 28 March 2026  02:09:59 +0000 (0:00:00.993)       0:00:22.424 ********
2026-03-28 02:10:01.611396 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.611403 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.611411 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.611419 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:01.611426 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611434 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:01.611441 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:01.611449 | orchestrator |
2026-03-28 02:10:01.611457 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-28 02:10:01.611465 | orchestrator | Saturday 28 March 2026  02:10:00 +0000 (0:00:00.558)       0:00:22.982 ********
2026-03-28 02:10:01.611473 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:01.611480 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:01.611495 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:01.611503 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:01.611517 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:42.360357 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:42.360486 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:42.360503 | orchestrator |
2026-03-28 02:10:42.360516 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-28 02:10:42.360529 | orchestrator | Saturday 28 March 2026  02:10:01 +0000 (0:00:01.114)       0:00:24.097 ********
2026-03-28 02:10:42.360547 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.360567 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.360585 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.360603 | orchestrator | changed: [testbed-manager]
2026-03-28 02:10:42.360622 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:42.360643 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:42.360661 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:42.360680 | orchestrator |
2026-03-28 02:10:42.360693 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-28 02:10:42.360704 | orchestrator | Saturday 28 March 2026  02:10:17 +0000 (0:00:16.100)       0:00:40.198 ********
2026-03-28 02:10:42.360715 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:42.360751 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.360762 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.360773 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.360784 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:42.360795 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:42.360805 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:42.360816 | orchestrator |
2026-03-28 02:10:42.360827 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-28 02:10:42.360838 | orchestrator | Saturday 28 March 2026  02:10:17 +0000 (0:00:00.214)       0:00:40.413 ********
2026-03-28 02:10:42.360849 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:42.360860 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.360870 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.360883 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.360902 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:42.360919 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:42.360937 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:42.360954 | orchestrator |
2026-03-28 02:10:42.361046 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-28 02:10:42.361070 | orchestrator | Saturday 28 March 2026  02:10:18 +0000 (0:00:00.232)       0:00:40.645 ********
2026-03-28 02:10:42.361088 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:42.361106 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.361124 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.361142 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.361160 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:42.361178 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:42.361198 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:42.361216 | orchestrator |
2026-03-28 02:10:42.361235 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-28 02:10:42.361255 | orchestrator | Saturday 28 March 2026  02:10:18 +0000 (0:00:00.233)       0:00:40.878 ********
2026-03-28 02:10:42.361277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:42.361300 | orchestrator |
2026-03-28 02:10:42.361320 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-28 02:10:42.361339 | orchestrator | Saturday 28 March 2026  02:10:18 +0000 (0:00:00.294)       0:00:41.172 ********
2026-03-28 02:10:42.361358 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:42.361376 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.361395 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.361413 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.361430 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:42.361447 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:42.361465 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:42.361483 | orchestrator |
2026-03-28 02:10:42.361501 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-28 02:10:42.361520 | orchestrator | Saturday 28 March 2026  02:10:20 +0000 (0:00:01.692)       0:00:42.865 ********
2026-03-28 02:10:42.361538 | orchestrator | changed: [testbed-manager]
2026-03-28 02:10:42.361556 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:10:42.361575 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:10:42.361593 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:10:42.361611 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:42.361630 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:42.361650 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:42.361669 | orchestrator |
2026-03-28 02:10:42.361687 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-28 02:10:42.361720 | orchestrator | Saturday 28 March 2026  02:10:21 +0000 (0:00:01.149)       0:00:44.014 ********
2026-03-28 02:10:42.361731 | orchestrator | ok: [testbed-manager]
2026-03-28 02:10:42.361742 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:10:42.361753 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:10:42.361778 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:10:42.361789 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:10:42.361800 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:10:42.361811 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:10:42.361821 | orchestrator |
2026-03-28 02:10:42.361833 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-28 02:10:42.361844 | orchestrator | Saturday 28 March 2026  02:10:22 +0000 (0:00:00.819)       0:00:44.833 ********
2026-03-28 02:10:42.361856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:10:42.361869 | orchestrator |
2026-03-28 02:10:42.361880 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-28 02:10:42.361892 | orchestrator | Saturday 28 March 2026  02:10:22 +0000 (0:00:00.317)       0:00:45.151 ********
2026-03-28 02:10:42.361903 | orchestrator | changed: [testbed-manager]
2026-03-28 02:10:42.361914 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:10:42.361924 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:10:42.361935 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:10:42.361946 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:10:42.361957 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:10:42.361968 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:10:42.361979 | orchestrator |
2026-03-28 02:10:42.362110 | orchestrator | TASK
[osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-28 02:10:42.362129 | orchestrator | Saturday 28 March 2026 02:10:23 +0000 (0:00:01.018) 0:00:46.169 ******** 2026-03-28 02:10:42.362140 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:10:42.362151 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:10:42.362162 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:10:42.362173 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:10:42.362184 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:10:42.362194 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:10:42.362205 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:10:42.362216 | orchestrator | 2026-03-28 02:10:42.362227 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-28 02:10:42.362238 | orchestrator | Saturday 28 March 2026 02:10:23 +0000 (0:00:00.220) 0:00:46.390 ******** 2026-03-28 02:10:42.362250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:10:42.362261 | orchestrator | 2026-03-28 02:10:42.362272 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-28 02:10:42.362283 | orchestrator | Saturday 28 March 2026 02:10:24 +0000 (0:00:00.310) 0:00:46.700 ******** 2026-03-28 02:10:42.362294 | orchestrator | ok: [testbed-manager] 2026-03-28 02:10:42.362305 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:10:42.362316 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:10:42.362326 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:10:42.362337 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:10:42.362348 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:10:42.362358 | orchestrator | ok: [testbed-node-0] 2026-03-28 
02:10:42.362369 | orchestrator | 2026-03-28 02:10:42.362380 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-28 02:10:42.362391 | orchestrator | Saturday 28 March 2026 02:10:25 +0000 (0:00:01.722) 0:00:48.422 ******** 2026-03-28 02:10:42.362401 | orchestrator | changed: [testbed-manager] 2026-03-28 02:10:42.362412 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:10:42.362423 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:10:42.362434 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:10:42.362444 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:10:42.362455 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:10:42.362466 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:10:42.362485 | orchestrator | 2026-03-28 02:10:42.362496 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-28 02:10:42.362508 | orchestrator | Saturday 28 March 2026 02:10:27 +0000 (0:00:01.131) 0:00:49.554 ******** 2026-03-28 02:10:42.362518 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:10:42.362529 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:10:42.362547 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:10:42.362565 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:10:42.362583 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:10:42.362600 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:10:42.362619 | orchestrator | changed: [testbed-manager] 2026-03-28 02:10:42.362639 | orchestrator | 2026-03-28 02:10:42.362657 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-28 02:10:42.362676 | orchestrator | Saturday 28 March 2026 02:10:39 +0000 (0:00:12.313) 0:01:01.868 ******** 2026-03-28 02:10:42.362688 | orchestrator | ok: [testbed-manager] 2026-03-28 02:10:42.362699 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:10:42.362709 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 02:10:42.362720 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:10:42.362731 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:10:42.362741 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:10:42.362752 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:10:42.362762 | orchestrator | 2026-03-28 02:10:42.362773 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-28 02:10:42.362784 | orchestrator | Saturday 28 March 2026 02:10:40 +0000 (0:00:01.286) 0:01:03.154 ******** 2026-03-28 02:10:42.362795 | orchestrator | ok: [testbed-manager] 2026-03-28 02:10:42.362806 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:10:42.362816 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:10:42.362827 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:10:42.362838 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:10:42.362848 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:10:42.362859 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:10:42.362869 | orchestrator | 2026-03-28 02:10:42.362880 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-28 02:10:42.362891 | orchestrator | Saturday 28 March 2026 02:10:41 +0000 (0:00:00.892) 0:01:04.047 ******** 2026-03-28 02:10:42.362910 | orchestrator | ok: [testbed-manager] 2026-03-28 02:10:42.362921 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:10:42.362931 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:10:42.362942 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:10:42.362953 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:10:42.362963 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:10:42.362974 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:10:42.362984 | orchestrator | 2026-03-28 02:10:42.363064 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-28 02:10:42.363076 | 
orchestrator | Saturday 28 March 2026 02:10:41 +0000 (0:00:00.228) 0:01:04.276 ******** 2026-03-28 02:10:42.363090 | orchestrator | ok: [testbed-manager] 2026-03-28 02:10:42.363108 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:10:42.363127 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:10:42.363145 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:10:42.363162 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:10:42.363180 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:10:42.363197 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:10:42.363216 | orchestrator | 2026-03-28 02:10:42.363234 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-28 02:10:42.363253 | orchestrator | Saturday 28 March 2026 02:10:42 +0000 (0:00:00.247) 0:01:04.523 ******** 2026-03-28 02:10:42.363275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:10:42.363295 | orchestrator | 2026-03-28 02:10:42.363319 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-28 02:13:08.154788 | orchestrator | Saturday 28 March 2026 02:10:42 +0000 (0:00:00.323) 0:01:04.847 ******** 2026-03-28 02:13:08.154891 | orchestrator | ok: [testbed-manager] 2026-03-28 02:13:08.154903 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.154912 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.154920 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.154928 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.154936 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.154944 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.154952 | orchestrator | 2026-03-28 02:13:08.154961 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-03-28 02:13:08.154969 | orchestrator | Saturday 28 March 2026 02:10:43 +0000 (0:00:01.615) 0:01:06.462 ******** 2026-03-28 02:13:08.154977 | orchestrator | changed: [testbed-manager] 2026-03-28 02:13:08.154986 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:13:08.154993 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:13:08.155001 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:13:08.155009 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:13:08.155017 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:13:08.155025 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:13:08.155032 | orchestrator | 2026-03-28 02:13:08.155040 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-28 02:13:08.155049 | orchestrator | Saturday 28 March 2026 02:10:44 +0000 (0:00:00.586) 0:01:07.049 ******** 2026-03-28 02:13:08.155057 | orchestrator | ok: [testbed-manager] 2026-03-28 02:13:08.155065 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155072 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155080 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155087 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155095 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.155103 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155110 | orchestrator | 2026-03-28 02:13:08.155119 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-28 02:13:08.155127 | orchestrator | Saturday 28 March 2026 02:10:44 +0000 (0:00:00.246) 0:01:07.295 ******** 2026-03-28 02:13:08.155135 | orchestrator | ok: [testbed-manager] 2026-03-28 02:13:08.155143 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155151 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155159 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155166 | orchestrator | ok: [testbed-node-1] 
2026-03-28 02:13:08.155174 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155182 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155189 | orchestrator | 2026-03-28 02:13:08.155198 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-28 02:13:08.155205 | orchestrator | Saturday 28 March 2026 02:10:45 +0000 (0:00:01.170) 0:01:08.466 ******** 2026-03-28 02:13:08.155212 | orchestrator | changed: [testbed-manager] 2026-03-28 02:13:08.155261 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:13:08.155269 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:13:08.155277 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:13:08.155285 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:13:08.155293 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:13:08.155301 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:13:08.155308 | orchestrator | 2026-03-28 02:13:08.155319 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-28 02:13:08.155326 | orchestrator | Saturday 28 March 2026 02:10:47 +0000 (0:00:01.658) 0:01:10.124 ******** 2026-03-28 02:13:08.155334 | orchestrator | ok: [testbed-manager] 2026-03-28 02:13:08.155343 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155351 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155359 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.155366 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155375 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155383 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155391 | orchestrator | 2026-03-28 02:13:08.155399 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-28 02:13:08.155429 | orchestrator | Saturday 28 March 2026 02:10:49 +0000 (0:00:02.315) 0:01:12.440 ******** 2026-03-28 02:13:08.155437 | orchestrator | ok: 
[testbed-manager] 2026-03-28 02:13:08.155445 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155452 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.155460 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155468 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155475 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155483 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155490 | orchestrator | 2026-03-28 02:13:08.155498 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-28 02:13:08.155507 | orchestrator | Saturday 28 March 2026 02:11:33 +0000 (0:00:43.960) 0:01:56.400 ******** 2026-03-28 02:13:08.155514 | orchestrator | changed: [testbed-manager] 2026-03-28 02:13:08.155522 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:13:08.155530 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:13:08.155538 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:13:08.155546 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:13:08.155554 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:13:08.155562 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:13:08.155570 | orchestrator | 2026-03-28 02:13:08.155578 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-28 02:13:08.155586 | orchestrator | Saturday 28 March 2026 02:12:51 +0000 (0:01:17.769) 0:03:14.169 ******** 2026-03-28 02:13:08.155594 | orchestrator | ok: [testbed-manager] 2026-03-28 02:13:08.155603 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155611 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155618 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155626 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.155633 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155641 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155648 | orchestrator | 2026-03-28 02:13:08.155655 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-28 02:13:08.155663 | orchestrator | Saturday 28 March 2026 02:12:53 +0000 (0:00:01.811) 0:03:15.981 ******** 2026-03-28 02:13:08.155670 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:13:08.155678 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:13:08.155686 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:13:08.155693 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:13:08.155700 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:13:08.155707 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:13:08.155715 | orchestrator | changed: [testbed-manager] 2026-03-28 02:13:08.155722 | orchestrator | 2026-03-28 02:13:08.155729 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-28 02:13:08.155737 | orchestrator | Saturday 28 March 2026 02:13:06 +0000 (0:00:13.339) 0:03:29.322 ******** 2026-03-28 02:13:08.155774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-28 02:13:08.155799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-28 02:13:08.155816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-28 02:13:08.155826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 02:13:08.155835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 02:13:08.155843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-28 02:13:08.155851 | orchestrator | 2026-03-28 02:13:08.155858 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-28 02:13:08.155866 | orchestrator | Saturday 28 March 2026 02:13:07 +0000 (0:00:00.458) 0:03:29.781 ******** 2026-03-28 02:13:08.155874 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-03-28 02:13:08.155881 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:13:08.155889 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 02:13:08.155896 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 02:13:08.155904 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:13:08.155911 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:13:08.155922 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 02:13:08.155930 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:13:08.155937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:13:08.155945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:13:08.155952 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:13:08.155960 | orchestrator | 2026-03-28 02:13:08.155967 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-28 02:13:08.155975 | orchestrator | Saturday 28 March 2026 02:13:08 +0000 (0:00:00.751) 0:03:30.532 ******** 2026-03-28 02:13:08.155982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 02:13:08.155991 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 02:13:08.155998 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 02:13:08.156006 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 02:13:08.156013 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-03-28 02:13:08.156025 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 02:13:14.914265 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 02:13:14.914400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 02:13:14.914459 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 02:13:14.914481 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 02:13:14.914501 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 02:13:14.914521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 02:13:14.914541 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 02:13:14.914561 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 02:13:14.914580 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 02:13:14.914600 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 02:13:14.914620 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 02:13:14.914638 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 02:13:14.914649 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 02:13:14.914660 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-03-28 02:13:14.914670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 02:13:14.914681 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 02:13:14.914691 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 02:13:14.914702 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 02:13:14.914713 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 02:13:14.914724 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 02:13:14.914735 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 02:13:14.914745 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 02:13:14.914760 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 02:13:14.914779 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 02:13:14.914798 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 02:13:14.914815 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 02:13:14.914832 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 02:13:14.914849 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 02:13:14.914868 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  
2026-03-28 02:13:14.914907 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 02:13:14.914926 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 02:13:14.914937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 02:13:14.914948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 02:13:14.914959 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 02:13:14.914981 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:13:14.914994 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:13:14.915005 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:13:14.915016 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:13:14.915027 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 02:13:14.915037 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 02:13:14.915048 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 02:13:14.915059 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 02:13:14.915070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 02:13:14.915112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 
2026-03-28 02:13:14.915133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 02:13:14.915144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 02:13:14.915155 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 02:13:14.915165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 02:13:14.915187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 02:13:14.915197 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915208 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 02:13:14.915218 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 02:13:14.915262 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 02:13:14.915274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 02:13:14.915296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 02:13:14.915306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 02:13:14.915317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 02:13:14.915327 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-28 02:13:14.915338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 02:13:14.915349 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-28 02:13:14.915359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 02:13:14.915370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-28 02:13:14.915380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-28 02:13:14.915392 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-28 02:13:14.915411 | orchestrator |
2026-03-28 02:13:14.915423 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-28 02:13:14.915433 | orchestrator | Saturday 28 March 2026 02:13:13 +0000 (0:00:05.696) 0:03:36.229 ********
2026-03-28 02:13:14.915444 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915455 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915476 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-28 02:13:14.915525 | orchestrator |
2026-03-28 02:13:14.915535 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-28 02:13:14.915546 | orchestrator | Saturday 28 March 2026 02:13:14 +0000 (0:00:00.647) 0:03:36.876 ********
2026-03-28 02:13:14.915556 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915567 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:14.915578 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915589 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915599 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:13:14.915610 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:13:14.915621 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915632 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:13:14.915642 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915653 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:14.915671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053539 | orchestrator |
2026-03-28 02:13:28.053655 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-28 02:13:28.053671 | orchestrator | Saturday 28 March 2026 02:13:14 +0000 (0:00:00.524) 0:03:37.401 ********
2026-03-28 02:13:28.053683 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053696 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053707 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:28.053720 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:13:28.053731 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053742 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053753 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:13:28.053764 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:13:28.053775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053798 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-28 02:13:28.053809 | orchestrator |
2026-03-28 02:13:28.053820 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-28 02:13:28.053856 | orchestrator | Saturday 28 March 2026 02:13:15 +0000 (0:00:00.631) 0:03:38.032 ********
2026-03-28 02:13:28.053868 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053879 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:28.053890 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053901 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053911 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:13:28.053922 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:13:28.053933 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053944 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:13:28.053955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053966 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053977 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-28 02:13:28.053988 | orchestrator |
2026-03-28 02:13:28.053999 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-28 02:13:28.054010 | orchestrator | Saturday 28 March 2026 02:13:16 +0000 (0:00:00.329) 0:03:38.638 ********
2026-03-28 02:13:28.054089 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:28.054103 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:13:28.054115 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:13:28.054128 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:13:28.054141 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:13:28.054154 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:13:28.054165 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:13:28.054176 | orchestrator |
2026-03-28 02:13:28.054187 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-28 02:13:28.054198 | orchestrator | Saturday 28 March 2026 02:13:16 +0000 (0:00:00.329) 0:03:38.967 ********
2026-03-28 02:13:28.054212 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:13:28.054231 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:13:28.054280 | orchestrator | ok: [testbed-manager]
2026-03-28 02:13:28.054293 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:13:28.054304 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:13:28.054314 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:13:28.054325 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:13:28.054336 | orchestrator |
2026-03-28 02:13:28.054347 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-28 02:13:28.054358 | orchestrator | Saturday 28 March 2026 02:13:22 +0000 (0:00:05.684) 0:03:44.652 ********
2026-03-28 02:13:28.054369 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-28 02:13:28.054380 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-28 02:13:28.054391 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:28.054402 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-28 02:13:28.054413 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:13:28.054423 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-28 02:13:28.054434 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:13:28.054445 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-28 02:13:28.054456 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:13:28.054467 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:13:28.054495 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-28 02:13:28.054507 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:13:28.054518 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-28 02:13:28.054529 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:13:28.054540 | orchestrator |
2026-03-28 02:13:28.054560 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-28 02:13:28.054572 | orchestrator | Saturday 28 March 2026 02:13:22 +0000 (0:00:00.308) 0:03:44.961 ********
2026-03-28 02:13:28.054582 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-28 02:13:28.054593 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-28 02:13:28.054604 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-28 02:13:28.054633 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-28 02:13:28.054645 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-28 02:13:28.054656 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-28 02:13:28.054666 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-28 02:13:28.054677 | orchestrator |
2026-03-28 02:13:28.054688 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-28 02:13:28.054699 | orchestrator | Saturday 28 March 2026 02:13:23 +0000 (0:00:01.025) 0:03:45.986 ********
2026-03-28 02:13:28.054711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:13:28.054724 | orchestrator |
2026-03-28 02:13:28.054735 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-28 02:13:28.054746 | orchestrator | Saturday 28 March 2026 02:13:23 +0000 (0:00:00.505) 0:03:46.492 ********
2026-03-28 02:13:28.054757 | orchestrator | ok: [testbed-manager]
2026-03-28 02:13:28.054767 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:13:28.054778 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:13:28.054789 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:13:28.054799 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:13:28.054810 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:13:28.054820 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:13:28.054831 | orchestrator |
2026-03-28 02:13:28.054842 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-28 02:13:28.054854 | orchestrator | Saturday 28 March 2026 02:13:25 +0000 (0:00:01.237) 0:03:47.730 ********
2026-03-28 02:13:28.054873 | orchestrator | ok: [testbed-manager]
2026-03-28 02:13:28.054887 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:13:28.054900 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:13:28.054918 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:13:28.054935 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:13:28.054946 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:13:28.054956 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:13:28.054967 | orchestrator |
2026-03-28 02:13:28.054978 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-28 02:13:28.054989 | orchestrator | Saturday 28 March 2026 02:13:25 +0000 (0:00:00.654) 0:03:48.384 ********
2026-03-28 02:13:28.054999 | orchestrator | changed: [testbed-manager]
2026-03-28 02:13:28.055010 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:13:28.055021 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:13:28.055031 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:13:28.055042 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:13:28.055053 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:13:28.055063 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:13:28.055074 | orchestrator |
2026-03-28 02:13:28.055085 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-28 02:13:28.055096 | orchestrator | Saturday 28 March 2026 02:13:26 +0000 (0:00:00.604) 0:03:48.989 ********
2026-03-28 02:13:28.055107 | orchestrator | ok: [testbed-manager]
2026-03-28 02:13:28.055117 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:13:28.055128 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:13:28.055139 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:13:28.055149 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:13:28.055160 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:13:28.055170 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:13:28.055181 | orchestrator |
2026-03-28 02:13:28.055192 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-28 02:13:28.055209 | orchestrator | Saturday 28 March 2026 02:13:27 +0000 (0:00:00.596) 0:03:49.586 ********
2026-03-28 02:13:28.055230 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662265.0436647, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:28.055281 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662296.97299, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:28.055294 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662295.2248669, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:28.055328 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662289.5018508, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.446893 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662294.658343, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.446975 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662297.0381422, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.446984 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774662291.3094282, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447019 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447037 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447043 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447049 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447073 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447079 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447085 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 02:13:33.447096 | orchestrator |
2026-03-28 02:13:33.447103 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-28 02:13:33.447110 | orchestrator | Saturday 28 March 2026 02:13:28 +0000 (0:00:00.956) 0:03:50.543 ********
2026-03-28 02:13:33.447116 | orchestrator | changed: [testbed-manager]
2026-03-28 02:13:33.447123 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:13:33.447129 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:13:33.447134 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:13:33.447140 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:13:33.447145 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:13:33.447151 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:13:33.447156 | orchestrator |
2026-03-28 02:13:33.447162 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-28 02:13:33.447167 | orchestrator | Saturday 28 March 2026 02:13:29 +0000 (0:00:01.025) 0:03:51.569 ********
2026-03-28 02:13:33.447173 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:13:33.447178 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:13:33.447183 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:13:33.447189 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:13:33.447199 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:13:33.447209 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:13:33.447218 | orchestrator | changed: [testbed-manager]
2026-03-28 02:13:33.447227 | orchestrator |
2026-03-28 02:13:33.447240 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-28 02:13:33.447249 | orchestrator | Saturday 28 March 2026 02:13:30 +0000 (0:00:01.877) 0:03:53.446 ********
2026-03-28 02:13:33.447315 | orchestrator | changed: [testbed-manager]
2026-03-28 02:13:33.447324 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:13:33.447333 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:13:33.447342 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:13:33.447351 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:13:33.447360 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:13:33.447369 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:13:33.447377 | orchestrator |
2026-03-28 02:13:33.447387 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-28 02:13:33.447394 | orchestrator | Saturday 28 March 2026 02:13:31 +0000 (0:00:01.035) 0:03:54.482 ********
2026-03-28 02:13:33.447399 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:13:33.447405 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:13:33.447410 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:13:33.447415 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:13:33.447421 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:13:33.447426 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:13:33.447432 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:13:33.447438 | orchestrator |
2026-03-28 02:13:33.447444 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-28 02:13:33.447451 | orchestrator | Saturday 28 March 2026 02:13:32 +0000 (0:00:00.302) 0:03:54.785 ********
2026-03-28 02:13:33.447457 | orchestrator | ok: [testbed-manager]
2026-03-28 02:13:33.447464 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:13:33.447471 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:13:33.447476 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:13:33.447482 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:13:33.447488 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:13:33.447494 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:13:33.447500 | orchestrator |
2026-03-28 02:13:33.447506 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-28 02:13:33.447512 | orchestrator | Saturday 28 March 2026 02:13:32 +0000 (0:00:00.704) 0:03:55.489 ********
2026-03-28 02:13:33.447520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:13:33.447535 | orchestrator |
2026-03-28 02:13:33.447541 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-28 02:13:33.447554 | orchestrator | Saturday 28 March 2026 02:13:33 +0000 (0:00:00.447) 0:03:55.937 ********
2026-03-28 02:14:49.753602 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.753702 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:14:49.753713 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:14:49.753720 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:14:49.753727 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:14:49.753733 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:14:49.753740 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:14:49.753747 | orchestrator |
2026-03-28 02:14:49.753754 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-28 02:14:49.753762 | orchestrator | Saturday 28 March 2026 02:13:41 +0000 (0:00:07.789) 0:04:03.727 ********
2026-03-28 02:14:49.753768 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.753775 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.753781 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.753788 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.753794 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.753801 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.753808 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.753814 | orchestrator |
2026-03-28 02:14:49.753821 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-28 02:14:49.753828 | orchestrator | Saturday 28 March 2026 02:13:42 +0000 (0:00:01.188) 0:04:04.915 ********
2026-03-28 02:14:49.753833 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.753841 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.753847 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.753854 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.753860 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.753866 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.753872 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.753878 | orchestrator |
2026-03-28 02:14:49.753885 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-28 02:14:49.753892 | orchestrator | Saturday 28 March 2026 02:13:44 +0000 (0:00:01.895) 0:04:06.810 ********
2026-03-28 02:14:49.753898 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.753905 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.753912 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.753918 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.753925 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.753932 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.753938 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.753944 | orchestrator |
2026-03-28 02:14:49.753951 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-28 02:14:49.753959 | orchestrator | Saturday 28 March 2026 02:13:44 +0000 (0:00:00.309) 0:04:07.120 ********
2026-03-28 02:14:49.753965 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.753971 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.753978 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.753985 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.753991 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.753997 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.754004 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.754011 | orchestrator |
2026-03-28 02:14:49.754059 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-28 02:14:49.754067 | orchestrator | Saturday 28 March 2026 02:13:44 +0000 (0:00:00.328) 0:04:07.449 ********
2026-03-28 02:14:49.754074 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.754080 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.754088 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.754116 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.754123 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.754129 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.754135 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.754142 | orchestrator |
2026-03-28 02:14:49.754148 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-28 02:14:49.754155 | orchestrator | Saturday 28 March 2026 02:13:45 +0000 (0:00:00.297) 0:04:07.747 ********
2026-03-28 02:14:49.754163 | orchestrator | ok: [testbed-manager]
2026-03-28 02:14:49.754169 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:14:49.754176 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:14:49.754182 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:14:49.754189 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:14:49.754195 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:14:49.754202 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:14:49.754208 | orchestrator |
2026-03-28 02:14:49.754216 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-28 02:14:49.754222 | orchestrator | Saturday 28 March 2026 02:13:50 +0000 (0:00:05.679) 0:04:13.426 ********
2026-03-28 02:14:49.754231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:14:49.754240 | orchestrator |
2026-03-28 02:14:49.754247 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-28 02:14:49.754254 | orchestrator | Saturday 28 March 2026 02:13:51 +0000 (0:00:00.380) 0:04:13.807 ********
2026-03-28 02:14:49.754261 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754267 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-28 02:14:49.754274 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754281 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:14:49.754287 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-28 02:14:49.754308 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754316 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:14:49.754323 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-28 02:14:49.754330 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754337 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-28 02:14:49.754343 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:14:49.754350 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754357 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:14:49.754364 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-28 02:14:49.754386 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:14:49.754392 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754412 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-28 02:14:49.754419 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:14:49.754425 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-28 02:14:49.754432 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-28 02:14:49.754439 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:14:49.754445 | orchestrator |
2026-03-28 02:14:49.754451 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-28 02:14:49.754458 | orchestrator | Saturday 28 March 2026 02:13:51 +0000 (0:00:00.349) 0:04:14.157 ********
2026-03-28 02:14:49.754464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:14:49.754470 | orchestrator |
2026-03-28 02:14:49.754477 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-28 02:14:49.754490 | orchestrator | Saturday 28 March 2026 02:13:52 +0000 (0:00:00.408) 0:04:14.566 ********
2026-03-28 02:14:49.754495 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-28 02:14:49.754502 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:14:49.754508 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-28 02:14:49.754515 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-28 02:14:49.754521 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:14:49.754527 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-28 02:14:49.754533 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:14:49.754540 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-28 02:14:49.754547 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:14:49.754552 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:14:49.754558 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-28 02:14:49.754565 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:14:49.754571 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-28 02:14:49.754577 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:14:49.754583 | orchestrator |
2026-03-28 02:14:49.754590 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-28 02:14:49.754596 | orchestrator | Saturday 28 March 2026 02:13:52 +0000 (0:00:00.337) 0:04:14.903 ********
2026-03-28 02:14:49.754603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:14:49.754609 | orchestrator |
2026-03-28 02:14:49.754615 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-28 02:14:49.754622 | orchestrator | Saturday 28 March 2026 02:13:52 +0000 (0:00:00.439) 0:04:15.343 ********
2026-03-28 02:14:49.754628 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:14:49.754634 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:14:49.754640 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:14:49.754647 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:14:49.754656 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:14:49.754663 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:14:49.754670 | orchestrator | changed: [testbed-manager]
2026-03-28 02:14:49.754676 | orchestrator |
2026-03-28 02:14:49.754681 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-28 02:14:49.754688 | orchestrator | Saturday 28 March 2026 02:14:26 +0000 (0:00:33.753) 0:04:49.096 ********
2026-03-28 02:14:49.754694 | orchestrator | changed: [testbed-manager]
2026-03-28 02:14:49.754700 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:14:49.754707 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:14:49.754713 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:14:49.754719 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:14:49.754725 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:14:49.754731 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:14:49.754737 | orchestrator |
2026-03-28 02:14:49.754743 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-28 02:14:49.754749 | orchestrator | Saturday 28 March 2026 02:14:34 +0000 (0:00:07.553) 0:04:56.649 ********
2026-03-28 02:14:49.754756 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:14:49.754762 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:14:49.754768 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:14:49.754774 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:14:49.754781 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:14:49.754787 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:14:49.754793 | orchestrator | changed:
[testbed-manager] 2026-03-28 02:14:49.754799 | orchestrator | 2026-03-28 02:14:49.754805 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-28 02:14:49.754816 | orchestrator | Saturday 28 March 2026 02:14:41 +0000 (0:00:07.553) 0:05:04.202 ******** 2026-03-28 02:14:49.754823 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:14:49.754829 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:14:49.754836 | orchestrator | ok: [testbed-manager] 2026-03-28 02:14:49.754842 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:14:49.754848 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:14:49.754854 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:14:49.754860 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:14:49.754866 | orchestrator | 2026-03-28 02:14:49.754873 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-28 02:14:49.754878 | orchestrator | Saturday 28 March 2026 02:14:43 +0000 (0:00:01.815) 0:05:06.018 ******** 2026-03-28 02:14:49.754885 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:14:49.754891 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:14:49.754897 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:14:49.754903 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:14:49.754909 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:14:49.754915 | orchestrator | changed: [testbed-manager] 2026-03-28 02:14:49.754921 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:14:49.754927 | orchestrator | 2026-03-28 02:14:49.754939 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-28 02:15:01.685615 | orchestrator | Saturday 28 March 2026 02:14:49 +0000 (0:00:06.217) 0:05:12.236 ******** 2026-03-28 02:15:01.685764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:15:01.685791 | orchestrator | 2026-03-28 02:15:01.685809 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-28 02:15:01.685852 | orchestrator | Saturday 28 March 2026 02:14:50 +0000 (0:00:00.630) 0:05:12.867 ******** 2026-03-28 02:15:01.685887 | orchestrator | changed: [testbed-manager] 2026-03-28 02:15:01.685908 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:15:01.685926 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:15:01.685944 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:15:01.685963 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:15:01.685982 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:15:01.686000 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:15:01.686012 | orchestrator | 2026-03-28 02:15:01.686079 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-28 02:15:01.686092 | orchestrator | Saturday 28 March 2026 02:14:51 +0000 (0:00:00.779) 0:05:13.647 ******** 2026-03-28 02:15:01.686103 | orchestrator | ok: [testbed-manager] 2026-03-28 02:15:01.686115 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:15:01.686126 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:15:01.686137 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:15:01.686148 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:15:01.686159 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:15:01.686170 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:15:01.686181 | orchestrator | 2026-03-28 02:15:01.686192 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-28 02:15:01.686203 | orchestrator | Saturday 28 March 2026 02:14:52 +0000 (0:00:01.718) 0:05:15.365 ******** 2026-03-28 02:15:01.686214 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:15:01.686225 | 
orchestrator | changed: [testbed-node-0] 2026-03-28 02:15:01.686236 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:15:01.686247 | orchestrator | changed: [testbed-manager] 2026-03-28 02:15:01.686260 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:15:01.686273 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:15:01.686287 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:15:01.686299 | orchestrator | 2026-03-28 02:15:01.686311 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-28 02:15:01.686324 | orchestrator | Saturday 28 March 2026 02:14:53 +0000 (0:00:00.846) 0:05:16.212 ******** 2026-03-28 02:15:01.686370 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.686417 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:15:01.686436 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.686454 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:15:01.686473 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:15:01.686492 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:15:01.686512 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:15:01.686533 | orchestrator | 2026-03-28 02:15:01.686552 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-28 02:15:01.686571 | orchestrator | Saturday 28 March 2026 02:14:53 +0000 (0:00:00.288) 0:05:16.501 ******** 2026-03-28 02:15:01.686588 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.686607 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:15:01.686625 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.686660 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:15:01.686680 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:15:01.686698 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:15:01.686717 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:15:01.686734 | 
orchestrator | 2026-03-28 02:15:01.686753 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-28 02:15:01.686773 | orchestrator | Saturday 28 March 2026 02:14:54 +0000 (0:00:00.422) 0:05:16.924 ******** 2026-03-28 02:15:01.686790 | orchestrator | ok: [testbed-manager] 2026-03-28 02:15:01.686808 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:15:01.686819 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:15:01.686830 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:15:01.686840 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:15:01.686851 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:15:01.686861 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:15:01.686872 | orchestrator | 2026-03-28 02:15:01.686882 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-28 02:15:01.686893 | orchestrator | Saturday 28 March 2026 02:14:54 +0000 (0:00:00.312) 0:05:17.236 ******** 2026-03-28 02:15:01.686904 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.686915 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:15:01.686925 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.686936 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:15:01.686946 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:15:01.686957 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:15:01.686967 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:15:01.686978 | orchestrator | 2026-03-28 02:15:01.686989 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-28 02:15:01.687000 | orchestrator | Saturday 28 March 2026 02:14:55 +0000 (0:00:00.352) 0:05:17.588 ******** 2026-03-28 02:15:01.687011 | orchestrator | ok: [testbed-manager] 2026-03-28 02:15:01.687021 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:15:01.687032 | orchestrator | ok: [testbed-node-4] 2026-03-28 
02:15:01.687043 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:15:01.687053 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:15:01.687064 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:15:01.687074 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:15:01.687085 | orchestrator | 2026-03-28 02:15:01.687096 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-28 02:15:01.687107 | orchestrator | Saturday 28 March 2026 02:14:55 +0000 (0:00:00.339) 0:05:17.928 ******** 2026-03-28 02:15:01.687118 | orchestrator | ok: [testbed-manager] =>  2026-03-28 02:15:01.687128 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687139 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 02:15:01.687149 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687159 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 02:15:01.687170 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687180 | orchestrator | ok: [testbed-node-5] =>  2026-03-28 02:15:01.687191 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687220 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 02:15:01.687241 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687251 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 02:15:01.687260 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687269 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 02:15:01.687278 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 02:15:01.687288 | orchestrator | 2026-03-28 02:15:01.687297 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-28 02:15:01.687307 | orchestrator | Saturday 28 March 2026 02:14:55 +0000 (0:00:00.352) 0:05:18.281 ******** 2026-03-28 02:15:01.687316 | orchestrator | ok: [testbed-manager] =>  2026-03-28 02:15:01.687325 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687335 | orchestrator | ok: 
[testbed-node-3] =>  2026-03-28 02:15:01.687344 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687353 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 02:15:01.687362 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687372 | orchestrator | ok: [testbed-node-5] =>  2026-03-28 02:15:01.687381 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687419 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 02:15:01.687428 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687437 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 02:15:01.687447 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687456 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 02:15:01.687466 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 02:15:01.687475 | orchestrator | 2026-03-28 02:15:01.687485 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-28 02:15:01.687494 | orchestrator | Saturday 28 March 2026 02:14:56 +0000 (0:00:00.313) 0:05:18.594 ******** 2026-03-28 02:15:01.687504 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.687513 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:15:01.687522 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.687532 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:15:01.687541 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:15:01.687550 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:15:01.687560 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:15:01.687569 | orchestrator | 2026-03-28 02:15:01.687578 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-28 02:15:01.687588 | orchestrator | Saturday 28 March 2026 02:14:56 +0000 (0:00:00.269) 0:05:18.863 ******** 2026-03-28 02:15:01.687597 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.687607 | orchestrator | 
skipping: [testbed-node-3] 2026-03-28 02:15:01.687616 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.687625 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:15:01.687635 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:15:01.687644 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:15:01.687653 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:15:01.687663 | orchestrator | 2026-03-28 02:15:01.687672 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-28 02:15:01.687682 | orchestrator | Saturday 28 March 2026 02:14:56 +0000 (0:00:00.284) 0:05:19.148 ******** 2026-03-28 02:15:01.687693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:15:01.687705 | orchestrator | 2026-03-28 02:15:01.687721 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-28 02:15:01.687731 | orchestrator | Saturday 28 March 2026 02:14:57 +0000 (0:00:00.423) 0:05:19.572 ******** 2026-03-28 02:15:01.687740 | orchestrator | ok: [testbed-manager] 2026-03-28 02:15:01.687750 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:15:01.687759 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:15:01.687769 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:15:01.687778 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:15:01.687794 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:15:01.687803 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:15:01.687813 | orchestrator | 2026-03-28 02:15:01.687822 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-28 02:15:01.687832 | orchestrator | Saturday 28 March 2026 02:14:58 +0000 (0:00:01.012) 0:05:20.585 ******** 2026-03-28 
02:15:01.687841 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:15:01.687851 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:15:01.687867 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:15:01.687882 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:15:01.687898 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:15:01.687914 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:15:01.687929 | orchestrator | ok: [testbed-manager] 2026-03-28 02:15:01.687945 | orchestrator | 2026-03-28 02:15:01.687961 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-28 02:15:01.687979 | orchestrator | Saturday 28 March 2026 02:15:01 +0000 (0:00:03.066) 0:05:23.651 ******** 2026-03-28 02:15:01.687996 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-28 02:15:01.688014 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-28 02:15:01.688034 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-28 02:15:01.688050 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-28 02:15:01.688066 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-28 02:15:01.688082 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-28 02:15:01.688098 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:15:01.688116 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-28 02:15:01.688133 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-28 02:15:01.688148 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-28 02:15:01.688164 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:15:01.688179 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-28 02:15:01.688195 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-28 02:15:01.688211 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-03-28 02:15:01.688227 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:15:01.688244 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-28 02:15:01.688273 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-28 02:16:03.319428 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:16:03.319593 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-28 02:16:03.319610 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-28 02:16:03.319620 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-28 02:16:03.319629 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-28 02:16:03.319638 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:03.319647 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:03.319655 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-28 02:16:03.319664 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-28 02:16:03.319673 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-28 02:16:03.319682 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:03.319691 | orchestrator | 2026-03-28 02:16:03.319701 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-28 02:16:03.319710 | orchestrator | Saturday 28 March 2026 02:15:01 +0000 (0:00:00.691) 0:05:24.344 ******** 2026-03-28 02:16:03.319719 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:03.319728 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:03.319736 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:03.319745 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:03.319754 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:03.319763 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:03.319772 | orchestrator | changed: [testbed-node-2] 
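The tasks that follow ("Add repository gpg key", "Add repository", "Update package cache") correspond to the usual way of enrolling Docker's upstream apt repository on a Debian-family host. A minimal shell sketch of that pattern, not taken from the role itself: the /tmp paths, the hardcoded `noble` codename (Ubuntu 24.04), and the keyring filename are illustration-only assumptions, and the network fetch of the key is left commented out.

```shell
# Hedged sketch: enroll the upstream Docker apt repository.
set -eu
ARCH=$(dpkg --print-architecture 2>/dev/null || echo amd64)
CODENAME=noble                            # Ubuntu 24.04; Ansible derives this from facts
KEYRING=/tmp/docker-archive-keyring.gpg   # real hosts use a path under /etc/apt/keyrings
# Key download (needs network), corresponding to "Add repository gpg key":
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o "$KEYRING"
# Repository entry, corresponding to "Add repository":
printf 'deb [arch=%s signed-by=%s] https://download.docker.com/linux/ubuntu %s stable\n' \
    "$ARCH" "$KEYRING" "$CODENAME" > /tmp/docker.list
cat /tmp/docker.list
```

On a real node the list file would live under /etc/apt/sources.list.d/ and be followed by `apt-get update`, which is what the "Update package cache" task reported as changed on every host.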
2026-03-28 02:16:03.319801 | orchestrator |
2026-03-28 02:16:03.319810 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-28 02:16:03.319819 | orchestrator | Saturday 28 March 2026 02:15:08 +0000 (0:00:06.579) 0:05:30.924 ********
2026-03-28 02:16:03.319828 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.319836 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.319844 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.319853 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.319861 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.319870 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.319878 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.319887 | orchestrator |
2026-03-28 02:16:03.319895 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-28 02:16:03.319904 | orchestrator | Saturday 28 March 2026 02:15:09 +0000 (0:00:00.963) 0:05:31.887 ********
2026-03-28 02:16:03.319913 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.319921 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.319929 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.319938 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.319946 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.319955 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.319963 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.319972 | orchestrator |
2026-03-28 02:16:03.319982 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-28 02:16:03.319991 | orchestrator | Saturday 28 March 2026 02:15:17 +0000 (0:00:08.028) 0:05:39.916 ********
2026-03-28 02:16:03.320001 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320011 | orchestrator | changed: [testbed-manager]
2026-03-28 02:16:03.320020 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320031 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320041 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320051 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320062 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320072 | orchestrator |
2026-03-28 02:16:03.320081 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-28 02:16:03.320091 | orchestrator | Saturday 28 March 2026 02:15:21 +0000 (0:00:03.638) 0:05:43.554 ********
2026-03-28 02:16:03.320101 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.320111 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320121 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320131 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320141 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320151 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320161 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320170 | orchestrator |
2026-03-28 02:16:03.320181 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-28 02:16:03.320191 | orchestrator | Saturday 28 March 2026 02:15:22 +0000 (0:00:01.313) 0:05:44.867 ********
2026-03-28 02:16:03.320201 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.320210 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320220 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320230 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320239 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320249 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320259 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320270 | orchestrator |
2026-03-28 02:16:03.320297 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-28 02:16:03.320318 | orchestrator | Saturday 28 March 2026 02:15:23 +0000 (0:00:01.615) 0:05:46.483 ********
2026-03-28 02:16:03.320335 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:03.320350 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:03.320365 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:03.320381 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:03.320406 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:03.320422 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:03.320432 | orchestrator | changed: [testbed-manager]
2026-03-28 02:16:03.320440 | orchestrator |
2026-03-28 02:16:03.320449 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-28 02:16:03.320457 | orchestrator | Saturday 28 March 2026 02:15:24 +0000 (0:00:00.627) 0:05:47.110 ********
2026-03-28 02:16:03.320466 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.320474 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320483 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320511 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320522 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320530 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320539 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320547 | orchestrator |
2026-03-28 02:16:03.320556 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-28 02:16:03.320580 | orchestrator | Saturday 28 March 2026 02:15:34 +0000 (0:00:09.704) 0:05:56.815 ********
2026-03-28 02:16:03.320590 | orchestrator | changed: [testbed-manager]
2026-03-28 02:16:03.320598 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320606 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320615 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320623 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320632 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320640 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320649 | orchestrator |
2026-03-28 02:16:03.320658 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-28 02:16:03.320674 | orchestrator | Saturday 28 March 2026 02:15:35 +0000 (0:00:01.016) 0:05:57.831 ********
2026-03-28 02:16:03.320689 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.320703 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320716 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320730 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320745 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320760 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320773 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320788 | orchestrator |
2026-03-28 02:16:03.320797 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-28 02:16:03.320809 | orchestrator | Saturday 28 March 2026 02:15:44 +0000 (0:00:09.312) 0:06:07.144 ********
2026-03-28 02:16:03.320823 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.320838 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.320853 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.320868 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.320883 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.320898 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.320913 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.320926 | orchestrator |
2026-03-28 02:16:03.320939 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-28 02:16:03.320948 | orchestrator | Saturday 28 March 2026 02:15:57 +0000 (0:00:12.443) 0:06:19.588 ********
2026-03-28 02:16:03.320957 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-28 02:16:03.320965 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-28 02:16:03.320974 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-28 02:16:03.320983 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-28 02:16:03.320991 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-28 02:16:03.321000 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-28 02:16:03.321009 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-28 02:16:03.321017 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-28 02:16:03.321026 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-28 02:16:03.321034 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-28 02:16:03.321050 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-28 02:16:03.321116 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-28 02:16:03.321133 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-28 02:16:03.321148 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-28 02:16:03.321162 | orchestrator |
2026-03-28 02:16:03.321176 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-28 02:16:03.321185 | orchestrator | Saturday 28 March 2026 02:15:58 +0000 (0:00:01.172) 0:06:20.761 ********
2026-03-28 02:16:03.321198 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:03.321206 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:03.321215 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:03.321223 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:03.321237 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:03.321252 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:03.321266 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:03.321280 | orchestrator |
2026-03-28 02:16:03.321295 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-28 02:16:03.321310 | orchestrator | Saturday 28 March 2026 02:15:58 +0000 (0:00:00.595) 0:06:21.356 ********
2026-03-28 02:16:03.321325 | orchestrator | ok: [testbed-manager]
2026-03-28 02:16:03.321340 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:16:03.321354 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:16:03.321369 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:16:03.321384 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:16:03.321398 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:16:03.321413 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:16:03.321428 | orchestrator |
2026-03-28 02:16:03.321442 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-28 02:16:03.321460 | orchestrator | Saturday 28 March 2026 02:16:02 +0000 (0:00:03.376) 0:06:24.733 ********
2026-03-28 02:16:03.321475 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:03.321490 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:03.321525 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:03.321534 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:03.321542 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:03.321550 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:03.321559 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:03.321567 | orchestrator |
2026-03-28 02:16:03.321576 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-28 02:16:03.321585 | orchestrator | Saturday 28 March 2026 02:16:02 +0000 (0:00:00.549) 0:06:25.283 ********
2026-03-28 02:16:03.321594 | orchestrator | skipping: [testbed-manager] => (item=python3-docker) 
2026-03-28 02:16:03.321603 | orchestrator | skipping: [testbed-manager] => (item=python-docker) 
2026-03-28 02:16:03.321616 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:03.321630 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker) 
2026-03-28 02:16:03.321645 | orchestrator | skipping: [testbed-node-3] => (item=python-docker) 
2026-03-28 02:16:03.321660 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:03.321674 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker) 
2026-03-28 02:16:03.321690 | orchestrator | skipping: [testbed-node-4] => (item=python-docker) 
2026-03-28 02:16:03.321704 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:03.321731 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker) 
2026-03-28 02:16:23.211403 | orchestrator | skipping: [testbed-node-5] => (item=python-docker) 
2026-03-28 02:16:23.212422 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:23.212485 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker) 
2026-03-28 02:16:23.212505 | orchestrator | skipping: [testbed-node-0] => (item=python-docker) 
2026-03-28 02:16:23.212583 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:23.212631 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker) 
2026-03-28 02:16:23.212643 | orchestrator | skipping: [testbed-node-1] => (item=python-docker) 
2026-03-28 02:16:23.212654 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:23.212665 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker) 
2026-03-28 02:16:23.212676 | orchestrator | skipping: [testbed-node-2] => (item=python-docker) 
2026-03-28 02:16:23.212687 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:23.212699 | orchestrator |
2026-03-28 02:16:23.212712 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-28 02:16:23.212725 | orchestrator | Saturday 28 March 2026 02:16:03 +0000 (0:00:00.792) 0:06:26.076 ********
2026-03-28 02:16:23.212736 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:23.212746 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:23.212757 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:23.212768 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:23.212779 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:23.212790 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:23.212800 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:23.212811 | orchestrator |
2026-03-28 02:16:23.212822 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-28 02:16:23.212833 | orchestrator | Saturday 28 March 2026 02:16:04 +0000 (0:00:00.528) 0:06:26.604 ********
2026-03-28 02:16:23.212844 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:23.212854 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:23.212865 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:23.212875 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:16:23.212886 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:16:23.212897 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:16:23.212907 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:16:23.212918 | orchestrator |
2026-03-28 02:16:23.212942 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-28 02:16:23.212953 | orchestrator | Saturday 28 March 2026 02:16:04 +0000 (0:00:00.516) 0:06:27.121 ********
2026-03-28 02:16:23.212964 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:16:23.212975 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:16:23.212986 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:16:23.212996 | orchestrator | skipping:
[testbed-node-5] 2026-03-28 02:16:23.213007 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:23.213017 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:23.213028 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:23.213038 | orchestrator | 2026-03-28 02:16:23.213050 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-28 02:16:23.213060 | orchestrator | Saturday 28 March 2026 02:16:05 +0000 (0:00:00.530) 0:06:27.651 ******** 2026-03-28 02:16:23.213071 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.213082 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.213093 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.213103 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:23.213114 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.213125 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.213135 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.213146 | orchestrator | 2026-03-28 02:16:23.213157 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-28 02:16:23.213168 | orchestrator | Saturday 28 March 2026 02:16:06 +0000 (0:00:01.794) 0:06:29.446 ******** 2026-03-28 02:16:23.213180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:16:23.213193 | orchestrator | 2026-03-28 02:16:23.213204 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-28 02:16:23.213215 | orchestrator | Saturday 28 March 2026 02:16:07 +0000 (0:00:00.895) 0:06:30.342 ******** 2026-03-28 02:16:23.213241 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.213253 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:23.213263 | orchestrator | changed: 
[testbed-node-4] 2026-03-28 02:16:23.213274 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:23.213285 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:23.213295 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:23.213306 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:23.213316 | orchestrator | 2026-03-28 02:16:23.213327 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-28 02:16:23.213338 | orchestrator | Saturday 28 March 2026 02:16:08 +0000 (0:00:00.871) 0:06:31.213 ******** 2026-03-28 02:16:23.213349 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.213360 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:23.213468 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:23.213483 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:23.213494 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:23.213504 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:23.213515 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:23.213554 | orchestrator | 2026-03-28 02:16:23.213565 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-28 02:16:23.213577 | orchestrator | Saturday 28 March 2026 02:16:09 +0000 (0:00:01.071) 0:06:32.285 ******** 2026-03-28 02:16:23.213587 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.213598 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:23.213609 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:23.213620 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:23.213630 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:23.213641 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:23.213651 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:23.213662 | orchestrator | 2026-03-28 02:16:23.213673 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-28 02:16:23.213707 | orchestrator | Saturday 28 March 2026 02:16:11 +0000 (0:00:01.644) 0:06:33.930 ******** 2026-03-28 02:16:23.213718 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:23.213730 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.213805 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.213817 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:23.213828 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.213839 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.213850 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.213860 | orchestrator | 2026-03-28 02:16:23.213871 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-28 02:16:23.213882 | orchestrator | Saturday 28 March 2026 02:16:12 +0000 (0:00:01.417) 0:06:35.347 ******** 2026-03-28 02:16:23.213893 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.213904 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:23.213948 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:23.213959 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:23.213970 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:23.213981 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:23.213991 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:23.214002 | orchestrator | 2026-03-28 02:16:23.214013 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-28 02:16:23.214087 | orchestrator | Saturday 28 March 2026 02:16:14 +0000 (0:00:01.326) 0:06:36.674 ******** 2026-03-28 02:16:23.214098 | orchestrator | changed: [testbed-manager] 2026-03-28 02:16:23.214109 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:23.214119 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:23.214130 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:23.214141 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 02:16:23.214151 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:23.214162 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:23.214173 | orchestrator | 2026-03-28 02:16:23.214195 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-28 02:16:23.214206 | orchestrator | Saturday 28 March 2026 02:16:15 +0000 (0:00:01.443) 0:06:38.117 ******** 2026-03-28 02:16:23.214217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:16:23.214243 | orchestrator | 2026-03-28 02:16:23.214254 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-28 02:16:23.214265 | orchestrator | Saturday 28 March 2026 02:16:16 +0000 (0:00:01.074) 0:06:39.192 ******** 2026-03-28 02:16:23.214298 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.214309 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.214331 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.214343 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.214353 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:23.214364 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.214375 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.214386 | orchestrator | 2026-03-28 02:16:23.214396 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-28 02:16:23.214408 | orchestrator | Saturday 28 March 2026 02:16:18 +0000 (0:00:01.464) 0:06:40.656 ******** 2026-03-28 02:16:23.214418 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.214429 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.214439 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.214450 | orchestrator | ok: [testbed-node-5] 
2026-03-28 02:16:23.214461 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.214486 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.214497 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.214508 | orchestrator | 2026-03-28 02:16:23.214537 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-28 02:16:23.214548 | orchestrator | Saturday 28 March 2026 02:16:19 +0000 (0:00:01.151) 0:06:41.808 ******** 2026-03-28 02:16:23.214559 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.214570 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.214589 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.214608 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:23.214626 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.214646 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.214665 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.214683 | orchestrator | 2026-03-28 02:16:23.214694 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-28 02:16:23.214705 | orchestrator | Saturday 28 March 2026 02:16:20 +0000 (0:00:01.142) 0:06:42.950 ******** 2026-03-28 02:16:23.214716 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:23.214726 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:23.214737 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:23.214748 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:23.214758 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:23.214769 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:23.214779 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:23.214790 | orchestrator | 2026-03-28 02:16:23.214800 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-28 02:16:23.214811 | orchestrator | Saturday 28 March 2026 02:16:21 +0000 (0:00:01.480) 0:06:44.431 ******** 2026-03-28 02:16:23.214822 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:16:23.214834 | orchestrator | 2026-03-28 02:16:23.214844 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:23.214855 | orchestrator | Saturday 28 March 2026 02:16:22 +0000 (0:00:00.942) 0:06:45.373 ******** 2026-03-28 02:16:23.214866 | orchestrator | 2026-03-28 02:16:23.214876 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:23.214896 | orchestrator | Saturday 28 March 2026 02:16:22 +0000 (0:00:00.046) 0:06:45.420 ******** 2026-03-28 02:16:23.214976 | orchestrator | 2026-03-28 02:16:23.214989 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:23.215000 | orchestrator | Saturday 28 March 2026 02:16:22 +0000 (0:00:00.045) 0:06:45.465 ******** 2026-03-28 02:16:23.215011 | orchestrator | 2026-03-28 02:16:23.215022 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:23.215045 | orchestrator | Saturday 28 March 2026 02:16:23 +0000 (0:00:00.048) 0:06:45.514 ******** 2026-03-28 02:16:50.821366 | orchestrator | 2026-03-28 02:16:50.821492 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:50.821510 | orchestrator | Saturday 28 March 2026 02:16:23 +0000 (0:00:00.040) 0:06:45.555 ******** 2026-03-28 02:16:50.821520 | orchestrator | 2026-03-28 02:16:50.821531 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:50.821541 | orchestrator | Saturday 28 March 2026 02:16:23 +0000 (0:00:00.040) 0:06:45.595 ******** 2026-03-28 02:16:50.821551 | orchestrator | 
2026-03-28 02:16:50.821627 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-28 02:16:50.821637 | orchestrator | Saturday 28 March 2026 02:16:23 +0000 (0:00:00.048) 0:06:45.643 ******** 2026-03-28 02:16:50.821647 | orchestrator | 2026-03-28 02:16:50.821657 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 02:16:50.821667 | orchestrator | Saturday 28 March 2026 02:16:23 +0000 (0:00:00.040) 0:06:45.684 ******** 2026-03-28 02:16:50.821677 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:50.821688 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:50.821698 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:50.821707 | orchestrator | 2026-03-28 02:16:50.821717 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-28 02:16:50.821727 | orchestrator | Saturday 28 March 2026 02:16:24 +0000 (0:00:01.198) 0:06:46.882 ******** 2026-03-28 02:16:50.821737 | orchestrator | changed: [testbed-manager] 2026-03-28 02:16:50.821748 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:50.821758 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:50.821768 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:50.821777 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:50.821787 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:50.821797 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:50.821806 | orchestrator | 2026-03-28 02:16:50.821816 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-28 02:16:50.821826 | orchestrator | Saturday 28 March 2026 02:16:26 +0000 (0:00:01.626) 0:06:48.509 ******** 2026-03-28 02:16:50.821836 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:50.821846 | orchestrator | changed: [testbed-manager] 2026-03-28 02:16:50.821856 | orchestrator | changed: [testbed-node-4] 
2026-03-28 02:16:50.821865 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:50.821880 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:50.821901 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:50.821925 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:50.821941 | orchestrator | 2026-03-28 02:16:50.821956 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-28 02:16:50.821974 | orchestrator | Saturday 28 March 2026 02:16:27 +0000 (0:00:01.211) 0:06:49.720 ******** 2026-03-28 02:16:50.821989 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:50.822005 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:50.822084 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:50.822102 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:50.822118 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:50.822133 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:50.822148 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:50.822164 | orchestrator | 2026-03-28 02:16:50.822181 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-28 02:16:50.822198 | orchestrator | Saturday 28 March 2026 02:16:29 +0000 (0:00:02.547) 0:06:52.268 ******** 2026-03-28 02:16:50.822245 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:16:50.822264 | orchestrator | 2026-03-28 02:16:50.822297 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-28 02:16:50.822312 | orchestrator | Saturday 28 March 2026 02:16:29 +0000 (0:00:00.111) 0:06:52.380 ******** 2026-03-28 02:16:50.822326 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.822338 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:50.822352 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:50.822365 | orchestrator | changed: [testbed-node-5] 2026-03-28 
02:16:50.822379 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:16:50.822392 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:50.822407 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:50.822420 | orchestrator | 2026-03-28 02:16:50.822433 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-28 02:16:50.822447 | orchestrator | Saturday 28 March 2026 02:16:30 +0000 (0:00:01.004) 0:06:53.385 ******** 2026-03-28 02:16:50.822460 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:50.822473 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:16:50.822486 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:16:50.822500 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:16:50.822514 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:50.822528 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:50.822544 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:50.822588 | orchestrator | 2026-03-28 02:16:50.822604 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-28 02:16:50.822620 | orchestrator | Saturday 28 March 2026 02:16:31 +0000 (0:00:00.571) 0:06:53.956 ******** 2026-03-28 02:16:50.822637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:16:50.822656 | orchestrator | 2026-03-28 02:16:50.822671 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-28 02:16:50.822686 | orchestrator | Saturday 28 March 2026 02:16:32 +0000 (0:00:01.133) 0:06:55.090 ******** 2026-03-28 02:16:50.822700 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.822715 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:50.822730 | orchestrator 
| ok: [testbed-node-4] 2026-03-28 02:16:50.822746 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:50.822761 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:50.822778 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:50.822793 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:50.822808 | orchestrator | 2026-03-28 02:16:50.822822 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-28 02:16:50.822836 | orchestrator | Saturday 28 March 2026 02:16:33 +0000 (0:00:00.870) 0:06:55.960 ******** 2026-03-28 02:16:50.822851 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-28 02:16:50.822894 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-28 02:16:50.822911 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-28 02:16:50.822925 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-28 02:16:50.822940 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-28 02:16:50.822954 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-28 02:16:50.822969 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-28 02:16:50.822983 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-28 02:16:50.822997 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-28 02:16:50.823010 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-28 02:16:50.823023 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-28 02:16:50.823038 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-28 02:16:50.823068 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-28 02:16:50.823083 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-28 02:16:50.823097 | orchestrator | 2026-03-28 02:16:50.823110 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-28 02:16:50.823123 | orchestrator | Saturday 28 March 2026 02:16:36 +0000 (0:00:02.618) 0:06:58.580 ******** 2026-03-28 02:16:50.823137 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:50.823151 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:16:50.823166 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:16:50.823182 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:16:50.823197 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:50.823212 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:50.823228 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:50.823243 | orchestrator | 2026-03-28 02:16:50.823259 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-28 02:16:50.823275 | orchestrator | Saturday 28 March 2026 02:16:36 +0000 (0:00:00.774) 0:06:59.355 ******** 2026-03-28 02:16:50.823293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:16:50.823312 | orchestrator | 2026-03-28 02:16:50.823329 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-28 02:16:50.823346 | orchestrator | Saturday 28 March 2026 02:16:37 +0000 (0:00:00.853) 0:07:00.208 ******** 2026-03-28 02:16:50.823362 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.823379 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:50.823394 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:50.823411 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:50.823427 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:50.823444 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:50.823460 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 02:16:50.823476 | orchestrator | 2026-03-28 02:16:50.823494 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-28 02:16:50.823509 | orchestrator | Saturday 28 March 2026 02:16:38 +0000 (0:00:00.965) 0:07:01.173 ******** 2026-03-28 02:16:50.823537 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.823587 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:50.823604 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:50.823618 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:50.823633 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:50.823648 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:50.823663 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:50.823679 | orchestrator | 2026-03-28 02:16:50.823694 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-28 02:16:50.823709 | orchestrator | Saturday 28 March 2026 02:16:39 +0000 (0:00:01.114) 0:07:02.288 ******** 2026-03-28 02:16:50.823726 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:50.823744 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:16:50.823759 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:16:50.823775 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:16:50.823789 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:50.823798 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:50.823807 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:50.823817 | orchestrator | 2026-03-28 02:16:50.823827 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-28 02:16:50.823836 | orchestrator | Saturday 28 March 2026 02:16:40 +0000 (0:00:00.525) 0:07:02.813 ******** 2026-03-28 02:16:50.823846 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.823855 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:16:50.823872 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 02:16:50.823887 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:16:50.823903 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:16:50.823932 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:16:50.823947 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:16:50.823962 | orchestrator | 2026-03-28 02:16:50.823978 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-28 02:16:50.823992 | orchestrator | Saturday 28 March 2026 02:16:41 +0000 (0:00:01.573) 0:07:04.386 ******** 2026-03-28 02:16:50.824007 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:16:50.824023 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:16:50.824038 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:16:50.824053 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:16:50.824068 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:16:50.824083 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:16:50.824096 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:16:50.824109 | orchestrator | 2026-03-28 02:16:50.824123 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-28 02:16:50.824137 | orchestrator | Saturday 28 March 2026 02:16:42 +0000 (0:00:00.490) 0:07:04.877 ******** 2026-03-28 02:16:50.824152 | orchestrator | ok: [testbed-manager] 2026-03-28 02:16:50.824167 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:16:50.824180 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:16:50.824194 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:16:50.824209 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:16:50.824223 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:16:50.824256 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:24.075199 | orchestrator | 2026-03-28 02:17:24.075348 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-28 02:17:24.075364 | orchestrator | Saturday 28 March 2026 02:16:50 +0000 (0:00:08.426) 0:07:13.304 ******** 2026-03-28 02:17:24.075372 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.075379 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:24.075387 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:24.075394 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:24.075400 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:24.075407 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:24.075414 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:24.075421 | orchestrator | 2026-03-28 02:17:24.075428 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-28 02:17:24.075435 | orchestrator | Saturday 28 March 2026 02:16:52 +0000 (0:00:01.752) 0:07:15.056 ******** 2026-03-28 02:17:24.075441 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.075450 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:24.075461 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:24.075473 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:24.075484 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:24.075495 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:24.075504 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:24.075515 | orchestrator | 2026-03-28 02:17:24.075527 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-28 02:17:24.075539 | orchestrator | Saturday 28 March 2026 02:16:54 +0000 (0:00:01.923) 0:07:16.980 ******** 2026-03-28 02:17:24.075550 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.075561 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:24.075571 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:24.075577 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:24.075584 | 
orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:24.075591 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:24.075642 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:24.075653 | orchestrator | 2026-03-28 02:17:24.075664 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 02:17:24.075676 | orchestrator | Saturday 28 March 2026 02:16:56 +0000 (0:00:01.685) 0:07:18.666 ******** 2026-03-28 02:17:24.075686 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.075697 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.075708 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.075747 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.075761 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.075773 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.075785 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.075795 | orchestrator | 2026-03-28 02:17:24.075803 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 02:17:24.075811 | orchestrator | Saturday 28 March 2026 02:16:57 +0000 (0:00:00.917) 0:07:19.583 ******** 2026-03-28 02:17:24.075819 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:17:24.075828 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:17:24.075865 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:17:24.075873 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:17:24.075881 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:17:24.075888 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:17:24.075896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:17:24.075903 | orchestrator | 2026-03-28 02:17:24.075911 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-28 02:17:24.075919 | orchestrator | Saturday 28 March 2026 02:16:58 +0000 (0:00:01.043) 0:07:20.627 ******** 
2026-03-28 02:17:24.075927 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:17:24.075935 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:17:24.075943 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:17:24.075950 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:17:24.075958 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:17:24.075966 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:17:24.075974 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:17:24.075981 | orchestrator | 2026-03-28 02:17:24.075989 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-28 02:17:24.075996 | orchestrator | Saturday 28 March 2026 02:16:58 +0000 (0:00:00.540) 0:07:21.167 ******** 2026-03-28 02:17:24.076004 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076027 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076035 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076043 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076050 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076058 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076065 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076073 | orchestrator | 2026-03-28 02:17:24.076081 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-28 02:17:24.076089 | orchestrator | Saturday 28 March 2026 02:16:59 +0000 (0:00:00.573) 0:07:21.741 ******** 2026-03-28 02:17:24.076096 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076103 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076109 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076117 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076123 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076130 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076136 | orchestrator | ok: [testbed-node-2] 2026-03-28 
02:17:24.076143 | orchestrator | 2026-03-28 02:17:24.076150 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-28 02:17:24.076156 | orchestrator | Saturday 28 March 2026 02:16:59 +0000 (0:00:00.579) 0:07:22.320 ******** 2026-03-28 02:17:24.076163 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076170 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076176 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076182 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076189 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076195 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076202 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076208 | orchestrator | 2026-03-28 02:17:24.076215 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-28 02:17:24.076222 | orchestrator | Saturday 28 March 2026 02:17:00 +0000 (0:00:00.780) 0:07:23.101 ******** 2026-03-28 02:17:24.076228 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076235 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076249 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076255 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076262 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076268 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076275 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076281 | orchestrator | 2026-03-28 02:17:24.076304 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-28 02:17:24.076311 | orchestrator | Saturday 28 March 2026 02:17:05 +0000 (0:00:05.326) 0:07:28.428 ******** 2026-03-28 02:17:24.076318 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:17:24.076324 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:17:24.076331 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:17:24.076337 
| orchestrator | skipping: [testbed-node-5] 2026-03-28 02:17:24.076344 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:17:24.076350 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:17:24.076357 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:17:24.076364 | orchestrator | 2026-03-28 02:17:24.076370 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-28 02:17:24.076377 | orchestrator | Saturday 28 March 2026 02:17:06 +0000 (0:00:00.553) 0:07:28.981 ******** 2026-03-28 02:17:24.076385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:17:24.076395 | orchestrator | 2026-03-28 02:17:24.076404 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-28 02:17:24.076416 | orchestrator | Saturday 28 March 2026 02:17:07 +0000 (0:00:01.015) 0:07:29.997 ******** 2026-03-28 02:17:24.076453 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076465 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076476 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076487 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076498 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076509 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076519 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076530 | orchestrator | 2026-03-28 02:17:24.076537 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-28 02:17:24.076544 | orchestrator | Saturday 28 March 2026 02:17:09 +0000 (0:00:01.765) 0:07:31.762 ******** 2026-03-28 02:17:24.076551 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076557 | orchestrator | ok: [testbed-node-3] 2026-03-28 
02:17:24.076563 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076570 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076576 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076583 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076589 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076635 | orchestrator | 2026-03-28 02:17:24.076642 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-28 02:17:24.076649 | orchestrator | Saturday 28 March 2026 02:17:10 +0000 (0:00:01.137) 0:07:32.900 ******** 2026-03-28 02:17:24.076656 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:24.076662 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:24.076669 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:24.076675 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:24.076682 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:24.076688 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:24.076695 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:24.076701 | orchestrator | 2026-03-28 02:17:24.076708 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-28 02:17:24.076715 | orchestrator | Saturday 28 March 2026 02:17:11 +0000 (0:00:00.862) 0:07:33.762 ******** 2026-03-28 02:17:24.076727 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076735 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076755 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076762 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076769 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076775 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076782 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-28 02:17:24.076789 | orchestrator | 2026-03-28 02:17:24.076795 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-28 02:17:24.076802 | orchestrator | Saturday 28 March 2026 02:17:13 +0000 (0:00:02.027) 0:07:35.789 ******** 2026-03-28 02:17:24.076809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:17:24.076816 | orchestrator | 2026-03-28 02:17:24.076823 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-28 02:17:24.076829 | orchestrator | Saturday 28 March 2026 02:17:14 +0000 (0:00:00.825) 0:07:36.615 ******** 2026-03-28 02:17:24.076836 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:24.076950 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:24.076967 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:24.076978 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:24.077020 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:24.077034 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:24.077046 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 02:17:24.077056 | orchestrator | 2026-03-28 02:17:24.077079 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-28 02:17:54.930300 | orchestrator | Saturday 28 March 2026 02:17:24 +0000 (0:00:09.944) 0:07:46.560 ******** 2026-03-28 02:17:54.930414 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:54.930433 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:54.930446 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:54.930458 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:54.930471 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:54.930483 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:54.930495 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:54.930508 | orchestrator | 2026-03-28 02:17:54.930521 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-28 02:17:54.930534 | orchestrator | Saturday 28 March 2026 02:17:26 +0000 (0:00:02.080) 0:07:48.641 ******** 2026-03-28 02:17:54.930547 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:54.930561 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:54.930574 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:54.930587 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:54.930600 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:54.930613 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:54.930625 | orchestrator | 2026-03-28 02:17:54.930691 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-28 02:17:54.930704 | orchestrator | Saturday 28 March 2026 02:17:27 +0000 (0:00:01.328) 0:07:49.969 ******** 2026-03-28 02:17:54.930716 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.930729 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.930741 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.930753 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 02:17:54.930765 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.930809 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.930825 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.930836 | orchestrator | 2026-03-28 02:17:54.930848 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-28 02:17:54.930860 | orchestrator | 2026-03-28 02:17:54.930872 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-28 02:17:54.930885 | orchestrator | Saturday 28 March 2026 02:17:28 +0000 (0:00:01.294) 0:07:51.263 ******** 2026-03-28 02:17:54.930897 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:17:54.930909 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:17:54.930922 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:17:54.930935 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:17:54.930948 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:17:54.930960 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:17:54.930971 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:17:54.930983 | orchestrator | 2026-03-28 02:17:54.930995 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-28 02:17:54.931007 | orchestrator | 2026-03-28 02:17:54.931021 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-28 02:17:54.931035 | orchestrator | Saturday 28 March 2026 02:17:29 +0000 (0:00:00.728) 0:07:51.992 ******** 2026-03-28 02:17:54.931048 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931060 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931073 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931084 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931093 | orchestrator | changed: [testbed-node-0] 2026-03-28 
02:17:54.931101 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931109 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931116 | orchestrator | 2026-03-28 02:17:54.931125 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-28 02:17:54.931147 | orchestrator | Saturday 28 March 2026 02:17:30 +0000 (0:00:01.319) 0:07:53.312 ******** 2026-03-28 02:17:54.931155 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:54.931164 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:54.931172 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:54.931180 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:54.931188 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:54.931196 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:54.931203 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:54.931210 | orchestrator | 2026-03-28 02:17:54.931218 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-28 02:17:54.931225 | orchestrator | Saturday 28 March 2026 02:17:32 +0000 (0:00:01.320) 0:07:54.632 ******** 2026-03-28 02:17:54.931232 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:17:54.931239 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:17:54.931246 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:17:54.931253 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:17:54.931260 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:17:54.931267 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:17:54.931274 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:17:54.931282 | orchestrator | 2026-03-28 02:17:54.931289 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-28 02:17:54.931296 | orchestrator | Saturday 28 March 2026 02:17:32 +0000 (0:00:00.458) 0:07:55.091 ******** 2026-03-28 02:17:54.931305 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:17:54.931314 | orchestrator | 2026-03-28 02:17:54.931321 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-28 02:17:54.931328 | orchestrator | Saturday 28 March 2026 02:17:33 +0000 (0:00:00.850) 0:07:55.941 ******** 2026-03-28 02:17:54.931338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:17:54.931357 | orchestrator | 2026-03-28 02:17:54.931364 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-28 02:17:54.931372 | orchestrator | Saturday 28 March 2026 02:17:34 +0000 (0:00:00.733) 0:07:56.674 ******** 2026-03-28 02:17:54.931379 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931386 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931393 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931400 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931407 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931414 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931421 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931428 | orchestrator | 2026-03-28 02:17:54.931452 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-28 02:17:54.931460 | orchestrator | Saturday 28 March 2026 02:17:42 +0000 (0:00:08.707) 0:08:05.382 ******** 2026-03-28 02:17:54.931467 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931474 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931481 | orchestrator | changed: [testbed-node-4] 2026-03-28 
02:17:54.931488 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931496 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931502 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931509 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931516 | orchestrator | 2026-03-28 02:17:54.931524 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-28 02:17:54.931531 | orchestrator | Saturday 28 March 2026 02:17:43 +0000 (0:00:01.101) 0:08:06.484 ******** 2026-03-28 02:17:54.931538 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931545 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931552 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931559 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931566 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931573 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931580 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931587 | orchestrator | 2026-03-28 02:17:54.931594 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-28 02:17:54.931601 | orchestrator | Saturday 28 March 2026 02:17:45 +0000 (0:00:01.418) 0:08:07.903 ******** 2026-03-28 02:17:54.931608 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931615 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931623 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931671 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931680 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931687 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931694 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931701 | orchestrator | 2026-03-28 02:17:54.931708 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-28 02:17:54.931715 | orchestrator | Saturday 28 March 2026 02:17:47 +0000 (0:00:01.945) 0:08:09.848 ******** 2026-03-28 02:17:54.931723 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931733 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931745 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931756 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931766 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931776 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931787 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931799 | orchestrator | 2026-03-28 02:17:54.931811 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-28 02:17:54.931823 | orchestrator | Saturday 28 March 2026 02:17:48 +0000 (0:00:01.260) 0:08:11.109 ******** 2026-03-28 02:17:54.931836 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.931847 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.931866 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.931874 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.931881 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.931889 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.931900 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.931912 | orchestrator | 2026-03-28 02:17:54.931923 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-28 02:17:54.931934 | orchestrator | 2026-03-28 02:17:54.931953 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-28 02:17:54.931967 | orchestrator | Saturday 28 March 2026 02:17:49 +0000 (0:00:01.165) 0:08:12.274 ******** 2026-03-28 02:17:54.931979 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-28 02:17:54.931991 | orchestrator | 2026-03-28 02:17:54.932003 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 02:17:54.932015 | orchestrator | Saturday 28 March 2026 02:17:50 +0000 (0:00:00.840) 0:08:13.114 ******** 2026-03-28 02:17:54.932026 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:54.932038 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:54.932049 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:54.932060 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:54.932072 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:54.932084 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:54.932096 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:54.932108 | orchestrator | 2026-03-28 02:17:54.932121 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 02:17:54.932133 | orchestrator | Saturday 28 March 2026 02:17:51 +0000 (0:00:01.106) 0:08:14.221 ******** 2026-03-28 02:17:54.932146 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:54.932158 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:54.932165 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:54.932172 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:54.932180 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:54.932186 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:54.932194 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:54.932201 | orchestrator | 2026-03-28 02:17:54.932208 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-28 02:17:54.932215 | orchestrator | Saturday 28 March 2026 02:17:53 +0000 (0:00:01.287) 0:08:15.508 ******** 2026-03-28 02:17:54.932222 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-28 02:17:54.932229 | orchestrator | 2026-03-28 02:17:54.932236 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 02:17:54.932243 | orchestrator | Saturday 28 March 2026 02:17:54 +0000 (0:00:01.034) 0:08:16.542 ******** 2026-03-28 02:17:54.932251 | orchestrator | ok: [testbed-manager] 2026-03-28 02:17:54.932258 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:17:54.932265 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:17:54.932272 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:17:54.932279 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:17:54.932286 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:17:54.932293 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:17:54.932300 | orchestrator | 2026-03-28 02:17:54.932316 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 02:17:56.568608 | orchestrator | Saturday 28 March 2026 02:17:54 +0000 (0:00:00.871) 0:08:17.413 ******** 2026-03-28 02:17:56.568751 | orchestrator | changed: [testbed-manager] 2026-03-28 02:17:56.568761 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:17:56.568768 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:17:56.568778 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:17:56.568788 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:17:56.568802 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:17:56.568816 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:17:56.568854 | orchestrator | 2026-03-28 02:17:56.568866 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:17:56.568876 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-28 02:17:56.568887 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-28 02:17:56.568897 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 02:17:56.568907 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 02:17:56.568916 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-28 02:17:56.568926 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 02:17:56.568936 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 02:17:56.568945 | orchestrator | 2026-03-28 02:17:56.568954 | orchestrator | 2026-03-28 02:17:56.568963 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:17:56.568973 | orchestrator | Saturday 28 March 2026 02:17:56 +0000 (0:00:01.097) 0:08:18.511 ******** 2026-03-28 02:17:56.568982 | orchestrator | =============================================================================== 2026-03-28 02:17:56.568992 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.77s 2026-03-28 02:17:56.569002 | orchestrator | osism.commons.packages : Download required packages -------------------- 43.96s 2026-03-28 02:17:56.569012 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.75s 2026-03-28 02:17:56.569022 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.10s 2026-03-28 02:17:56.569031 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.34s 2026-03-28 02:17:56.569059 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.44s 2026-03-28 02:17:56.569069 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 
12.31s 2026-03-28 02:17:56.569080 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.94s 2026-03-28 02:17:56.569090 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.70s 2026-03-28 02:17:56.569101 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.31s 2026-03-28 02:17:56.569111 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.71s 2026-03-28 02:17:56.569121 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.43s 2026-03-28 02:17:56.569132 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.03s 2026-03-28 02:17:56.569139 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.79s 2026-03-28 02:17:56.569145 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.55s 2026-03-28 02:17:56.569151 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.55s 2026-03-28 02:17:56.569158 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.58s 2026-03-28 02:17:56.569165 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.22s 2026-03-28 02:17:56.569172 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.70s 2026-03-28 02:17:56.569180 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.68s 2026-03-28 02:17:56.925145 | orchestrator | + osism apply fail2ban 2026-03-28 02:18:09.949159 | orchestrator | 2026-03-28 02:18:09 | INFO  | Task 67067bc4-5089-4285-8619-e4e4f0ccb4fd (fail2ban) was prepared for execution. 
2026-03-28 02:18:09.949263 | orchestrator | 2026-03-28 02:18:09 | INFO  | It takes a moment until task 67067bc4-5089-4285-8619-e4e4f0ccb4fd (fail2ban) has been started and output is visible here. 2026-03-28 02:18:33.505471 | orchestrator | 2026-03-28 02:18:33.505610 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-28 02:18:33.505637 | orchestrator | 2026-03-28 02:18:33.505656 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-28 02:18:33.505749 | orchestrator | Saturday 28 March 2026 02:18:14 +0000 (0:00:00.285) 0:00:00.285 ******** 2026-03-28 02:18:33.505775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:18:33.505794 | orchestrator | 2026-03-28 02:18:33.505811 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-28 02:18:33.505829 | orchestrator | Saturday 28 March 2026 02:18:15 +0000 (0:00:01.160) 0:00:01.446 ******** 2026-03-28 02:18:33.505846 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:18:33.505863 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:18:33.505873 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:18:33.505883 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:18:33.505893 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:18:33.505903 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:18:33.505913 | orchestrator | changed: [testbed-manager] 2026-03-28 02:18:33.505923 | orchestrator | 2026-03-28 02:18:33.505933 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-28 02:18:33.505943 | orchestrator | Saturday 28 March 2026 02:18:28 +0000 (0:00:12.598) 0:00:14.045 ******** 
2026-03-28 02:18:33.505953 | orchestrator | changed: [testbed-manager]
2026-03-28 02:18:33.505963 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:18:33.505973 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:18:33.505982 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:18:33.505992 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:18:33.506002 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:18:33.506011 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:18:33.506084 | orchestrator |
2026-03-28 02:18:33.506096 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-28 02:18:33.506108 | orchestrator | Saturday 28 March 2026 02:18:29 +0000 (0:00:01.422) 0:00:15.467 ********
2026-03-28 02:18:33.506119 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:18:33.506131 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:18:33.506142 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:18:33.506153 | orchestrator | ok: [testbed-manager]
2026-03-28 02:18:33.506164 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:18:33.506176 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:18:33.506187 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:18:33.506198 | orchestrator |
2026-03-28 02:18:33.506209 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-28 02:18:33.506220 | orchestrator | Saturday 28 March 2026 02:18:31 +0000 (0:00:01.470) 0:00:16.938 ********
2026-03-28 02:18:33.506231 | orchestrator | changed: [testbed-manager]
2026-03-28 02:18:33.506242 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:18:33.506252 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:18:33.506263 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:18:33.506274 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:18:33.506285 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:18:33.506296 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:18:33.506308 | orchestrator |
2026-03-28 02:18:33.506319 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:18:33.506330 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506370 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506383 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506400 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506417 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506433 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506449 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:18:33.506465 | orchestrator |
2026-03-28 02:18:33.506480 | orchestrator |
2026-03-28 02:18:33.506497 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:18:33.506514 | orchestrator | Saturday 28 March 2026 02:18:33 +0000 (0:00:01.701) 0:00:18.639 ********
2026-03-28 02:18:33.506532 | orchestrator | ===============================================================================
2026-03-28 02:18:33.506549 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.60s
2026-03-28 02:18:33.506565 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.70s
2026-03-28 02:18:33.506580 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-03-28 02:18:33.506590 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.42s
2026-03-28 02:18:33.506602 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s
2026-03-28 02:18:33.839659 | orchestrator | + osism apply network
2026-03-28 02:18:45.976425 | orchestrator | 2026-03-28 02:18:45 | INFO  | Task 2c2b13b4-c1f0-4c74-8ac0-78cc240fba06 (network) was prepared for execution.
2026-03-28 02:18:45.976536 | orchestrator | 2026-03-28 02:18:45 | INFO  | It takes a moment until task 2c2b13b4-c1f0-4c74-8ac0-78cc240fba06 (network) has been started and output is visible here.
2026-03-28 02:19:15.736389 | orchestrator |
2026-03-28 02:19:15.736531 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-28 02:19:15.736561 | orchestrator |
2026-03-28 02:19:15.736582 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-28 02:19:15.736603 | orchestrator | Saturday 28 March 2026 02:18:50 +0000 (0:00:00.259) 0:00:00.259 ********
2026-03-28 02:19:15.736624 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.736645 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.736666 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.736687 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.736707 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.736845 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.736872 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.736893 | orchestrator |
2026-03-28 02:19:15.736908 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-28 02:19:15.736921 | orchestrator | Saturday 28 March 2026 02:18:51 +0000 (0:00:00.800) 0:00:01.059 ********
2026-03-28 02:19:15.736936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:19:15.736952 | orchestrator |
2026-03-28 02:19:15.736965 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-28 02:19:15.736978 | orchestrator | Saturday 28 March 2026 02:18:52 +0000 (0:00:01.359) 0:00:02.419 ********
2026-03-28 02:19:15.737018 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.737039 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.737054 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.737066 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.737078 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.737093 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.737113 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.737125 | orchestrator |
2026-03-28 02:19:15.737138 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-28 02:19:15.737150 | orchestrator | Saturday 28 March 2026 02:18:54 +0000 (0:00:02.092) 0:00:04.511 ********
2026-03-28 02:19:15.737162 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.737174 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.737192 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.737212 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.737226 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.737238 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.737250 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.737261 | orchestrator |
2026-03-28 02:19:15.737272 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-28 02:19:15.737283 | orchestrator | Saturday 28 March 2026 02:18:56 +0000 (0:00:01.768) 0:00:06.279 ********
2026-03-28 02:19:15.737294 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-28 02:19:15.737305 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-28 02:19:15.737316 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-28 02:19:15.737327 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-28 02:19:15.737338 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-28 02:19:15.737348 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-28 02:19:15.737359 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-28 02:19:15.737369 | orchestrator |
2026-03-28 02:19:15.737397 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-28 02:19:15.737409 | orchestrator | Saturday 28 March 2026 02:18:57 +0000 (0:00:00.992) 0:00:07.272 ********
2026-03-28 02:19:15.737424 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 02:19:15.737436 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 02:19:15.737447 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 02:19:15.737457 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 02:19:15.737468 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 02:19:15.737478 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 02:19:15.737489 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 02:19:15.737502 | orchestrator |
2026-03-28 02:19:15.737520 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-28 02:19:15.737540 | orchestrator | Saturday 28 March 2026 02:19:00 +0000 (0:00:03.486) 0:00:10.758 ********
2026-03-28 02:19:15.737558 | orchestrator | changed: [testbed-manager]
2026-03-28 02:19:15.737576 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:19:15.737591 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:19:15.737608 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:19:15.737626 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:19:15.737644 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:19:15.737663 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:19:15.737682 | orchestrator |
2026-03-28 02:19:15.737700 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-28 02:19:15.737718 | orchestrator | Saturday 28 March 2026 02:19:02 +0000 (0:00:01.606) 0:00:12.364 ********
2026-03-28 02:19:15.737764 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 02:19:15.737776 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 02:19:15.737786 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 02:19:15.737797 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 02:19:15.737808 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 02:19:15.737830 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 02:19:15.737841 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 02:19:15.737937 | orchestrator |
2026-03-28 02:19:15.737967 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-28 02:19:15.737979 | orchestrator | Saturday 28 March 2026 02:19:04 +0000 (0:00:01.885) 0:00:14.250 ********
2026-03-28 02:19:15.737990 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.738011 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.738081 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.738092 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.738103 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.738149 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.738161 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.738172 | orchestrator |
2026-03-28 02:19:15.738183 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-28 02:19:15.738216 | orchestrator | Saturday 28 March 2026 02:19:05 +0000 (0:00:01.217) 0:00:15.467 ********
2026-03-28 02:19:15.738228 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:19:15.738239 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:15.738250 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:15.738260 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:15.738271 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:19:15.738281 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:19:15.738292 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:19:15.738302 | orchestrator |
2026-03-28 02:19:15.738313 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-28 02:19:15.738324 | orchestrator | Saturday 28 March 2026 02:19:06 +0000 (0:00:00.695) 0:00:16.163 ********
2026-03-28 02:19:15.738335 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.738345 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.738356 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.738367 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.738377 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.738388 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.738398 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.738409 | orchestrator |
2026-03-28 02:19:15.738420 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-28 02:19:15.738431 | orchestrator | Saturday 28 March 2026 02:19:08 +0000 (0:00:02.267) 0:00:18.431 ********
2026-03-28 02:19:15.738447 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:15.738467 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:15.738487 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:15.738508 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:19:15.738524 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:19:15.738535 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:19:15.738547 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-28 02:19:15.738559 | orchestrator |
2026-03-28 02:19:15.738570 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-28 02:19:15.738581 | orchestrator | Saturday 28 March 2026 02:19:09 +0000 (0:00:00.947) 0:00:19.378 ********
2026-03-28 02:19:15.738592 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.738603 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:19:15.738613 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:19:15.738624 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:19:15.738634 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:19:15.738652 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:19:15.738670 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:19:15.738688 | orchestrator |
2026-03-28 02:19:15.738707 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-28 02:19:15.738757 | orchestrator | Saturday 28 March 2026 02:19:11 +0000 (0:00:01.686) 0:00:21.064 ********
2026-03-28 02:19:15.738778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:19:15.738810 | orchestrator |
2026-03-28 02:19:15.738822 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-28 02:19:15.738833 | orchestrator | Saturday 28 March 2026 02:19:12 +0000 (0:00:01.329) 0:00:22.394 ********
2026-03-28 02:19:15.738843 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.738854 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.738865 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.738876 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.738886 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.738904 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.738916 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.738926 | orchestrator |
2026-03-28 02:19:15.738937 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-28 02:19:15.738948 | orchestrator | Saturday 28 March 2026 02:19:13 +0000 (0:00:01.180) 0:00:23.574 ********
2026-03-28 02:19:15.738959 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:15.738969 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:15.738980 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:15.738990 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:15.739001 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:15.739011 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:15.739022 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:15.739032 | orchestrator |
2026-03-28 02:19:15.739043 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-28 02:19:15.739054 | orchestrator | Saturday 28 March 2026 02:19:14 +0000 (0:00:00.749) 0:00:24.324 ********
2026-03-28 02:19:15.739065 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739076 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739087 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739098 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739109 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739119 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739131 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739148 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739166 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739185 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739204 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739215 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-28 02:19:15.739226 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739237 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-28 02:19:15.739247 | orchestrator |
2026-03-28 02:19:15.739268 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-28 02:19:34.692177 | orchestrator | Saturday 28 March 2026 02:19:15 +0000 (0:00:01.253) 0:00:25.578 ********
2026-03-28 02:19:34.692280 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:19:34.692295 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:34.692304 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:34.692313 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:34.692321 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:19:34.692326 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:19:34.692331 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:19:34.692337 | orchestrator |
2026-03-28 02:19:34.692343 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-28 02:19:34.692366 | orchestrator | Saturday 28 March 2026 02:19:16 +0000 (0:00:00.702) 0:00:26.280 ********
2026-03-28 02:19:34.692374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-03-28 02:19:34.692381 | orchestrator |
2026-03-28 02:19:34.692387 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-28 02:19:34.692392 | orchestrator | Saturday 28 March 2026 02:19:21 +0000 (0:00:04.986) 0:00:31.267 ********
2026-03-28 02:19:34.692399 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692416 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692509 | orchestrator |
2026-03-28 02:19:34.692514 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-28 02:19:34.692520 | orchestrator | Saturday 28 March 2026 02:19:28 +0000 (0:00:06.653) 0:00:37.920 ********
2026-03-28 02:19:34.692525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692530 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692554 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-28 02:19:34.692580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:34.692598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:41.295179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-28 02:19:41.295286 | orchestrator |
2026-03-28 02:19:41.295304 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-28 02:19:41.295319 | orchestrator | Saturday 28 March 2026 02:19:34 +0000 (0:00:06.614) 0:00:44.534 ********
2026-03-28 02:19:41.295334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:19:41.295347 | orchestrator |
2026-03-28 02:19:41.295359 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-28 02:19:41.295371 | orchestrator | Saturday 28 March 2026 02:19:35 +0000 (0:00:01.334) 0:00:45.869 ********
2026-03-28 02:19:41.295383 | orchestrator | ok: [testbed-manager]
2026-03-28 02:19:41.295397 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:19:41.295408 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:19:41.295420 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:19:41.295432 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:19:41.295443 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:19:41.295455 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:19:41.295467 | orchestrator |
2026-03-28 02:19:41.295479 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-28 02:19:41.295490 | orchestrator | Saturday 28 March 2026 02:19:37 +0000 (0:00:01.211) 0:00:47.080 ********
2026-03-28 02:19:41.295502 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295515 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295527 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295539 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295551 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295563 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295575 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295586 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295598 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:19:41.295611 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295624 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295650 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295662 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295674 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:41.295686 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295722 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295735 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295747 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295791 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:41.295804 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295817 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295830 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295842 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295855 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:41.295867 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295880 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295892 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295904 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295916 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:19:41.295929 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:19:41.295941 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-28 02:19:41.295954 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-28 02:19:41.295966 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-28 02:19:41.295979 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-28 02:19:41.295991 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:19:41.296004 | orchestrator |
2026-03-28 02:19:41.296016 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-28 02:19:41.296045 | orchestrator | Saturday 28 March 2026 02:19:39 +0000 (0:00:02.121) 0:00:49.202 ********
2026-03-28 02:19:41.296058 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:19:41.296070 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:41.296082 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:41.296093 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:41.296105 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:19:41.296117 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:19:41.296128 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:19:41.296140 | orchestrator |
2026-03-28 02:19:41.296151 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-28 02:19:41.296163 | orchestrator | Saturday 28 March 2026 02:19:40 +0000 (0:00:00.739) 0:00:49.942 ********
2026-03-28 02:19:41.296175 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:19:41.296186 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:19:41.296198 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:19:41.296210 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:19:41.296221 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:19:41.296235 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:19:41.296247 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:19:41.296261 | orchestrator | 2026-03-28 02:19:41.296273 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:19:41.296286 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 02:19:41.296299 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296321 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296333 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296345 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296358 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296370 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 02:19:41.296381 | orchestrator | 2026-03-28 02:19:41.296394 | orchestrator | 2026-03-28 02:19:41.296406 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:19:41.296418 | orchestrator | Saturday 28 March 2026 02:19:40 +0000 (0:00:00.786) 0:00:50.728 ******** 2026-03-28 02:19:41.296430 | orchestrator | =============================================================================== 2026-03-28 02:19:41.296448 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.65s 2026-03-28 02:19:41.296460 | orchestrator | osism.commons.network : Create systemd networkd network files 
----------- 6.61s 2026-03-28 02:19:41.296471 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.99s 2026-03-28 02:19:41.296483 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s 2026-03-28 02:19:41.296495 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.27s 2026-03-28 02:19:41.296507 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.12s 2026-03-28 02:19:41.296519 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s 2026-03-28 02:19:41.296530 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s 2026-03-28 02:19:41.296542 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2026-03-28 02:19:41.296554 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2026-03-28 02:19:41.296566 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.61s 2026-03-28 02:19:41.296577 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.36s 2026-03-28 02:19:41.296589 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s 2026-03-28 02:19:41.296601 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s 2026-03-28 02:19:41.296612 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s 2026-03-28 02:19:41.296624 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.22s 2026-03-28 02:19:41.296636 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2026-03-28 02:19:41.296648 | orchestrator | osism.commons.network : List existing configuration files --------------- 
1.18s 2026-03-28 02:19:41.296661 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2026-03-28 02:19:41.296673 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2026-03-28 02:19:41.649422 | orchestrator | + osism apply wireguard 2026-03-28 02:19:53.761011 | orchestrator | 2026-03-28 02:19:53 | INFO  | Task 0a28b579-94ad-4760-a689-6b618a87c9dc (wireguard) was prepared for execution. 2026-03-28 02:19:53.761120 | orchestrator | 2026-03-28 02:19:53 | INFO  | It takes a moment until task 0a28b579-94ad-4760-a689-6b618a87c9dc (wireguard) has been started and output is visible here. 2026-03-28 02:20:15.553238 | orchestrator | 2026-03-28 02:20:15.553352 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-28 02:20:15.553396 | orchestrator | 2026-03-28 02:20:15.553409 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-28 02:20:15.553421 | orchestrator | Saturday 28 March 2026 02:19:58 +0000 (0:00:00.228) 0:00:00.228 ******** 2026-03-28 02:20:15.553432 | orchestrator | ok: [testbed-manager] 2026-03-28 02:20:15.553443 | orchestrator | 2026-03-28 02:20:15.553454 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-28 02:20:15.553465 | orchestrator | Saturday 28 March 2026 02:20:00 +0000 (0:00:01.640) 0:00:01.869 ******** 2026-03-28 02:20:15.553476 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553488 | orchestrator | 2026-03-28 02:20:15.553504 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-28 02:20:15.553515 | orchestrator | Saturday 28 March 2026 02:20:07 +0000 (0:00:07.309) 0:00:09.178 ******** 2026-03-28 02:20:15.553526 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553537 | orchestrator | 2026-03-28 02:20:15.553548 | orchestrator | 
TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-28 02:20:15.553559 | orchestrator | Saturday 28 March 2026 02:20:08 +0000 (0:00:00.560) 0:00:09.739 ******** 2026-03-28 02:20:15.553569 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553580 | orchestrator | 2026-03-28 02:20:15.553591 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-28 02:20:15.553602 | orchestrator | Saturday 28 March 2026 02:20:08 +0000 (0:00:00.442) 0:00:10.181 ******** 2026-03-28 02:20:15.553612 | orchestrator | ok: [testbed-manager] 2026-03-28 02:20:15.553623 | orchestrator | 2026-03-28 02:20:15.553634 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-28 02:20:15.553645 | orchestrator | Saturday 28 March 2026 02:20:09 +0000 (0:00:00.758) 0:00:10.939 ******** 2026-03-28 02:20:15.553655 | orchestrator | ok: [testbed-manager] 2026-03-28 02:20:15.553666 | orchestrator | 2026-03-28 02:20:15.553677 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-28 02:20:15.553688 | orchestrator | Saturday 28 March 2026 02:20:09 +0000 (0:00:00.477) 0:00:11.416 ******** 2026-03-28 02:20:15.553698 | orchestrator | ok: [testbed-manager] 2026-03-28 02:20:15.553709 | orchestrator | 2026-03-28 02:20:15.553720 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-28 02:20:15.553731 | orchestrator | Saturday 28 March 2026 02:20:10 +0000 (0:00:00.427) 0:00:11.844 ******** 2026-03-28 02:20:15.553742 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553752 | orchestrator | 2026-03-28 02:20:15.553763 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-28 02:20:15.553774 | orchestrator | Saturday 28 March 2026 02:20:11 +0000 (0:00:01.341) 0:00:13.185 ******** 2026-03-28 02:20:15.553785 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 02:20:15.553830 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553843 | orchestrator | 2026-03-28 02:20:15.553855 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-28 02:20:15.553868 | orchestrator | Saturday 28 March 2026 02:20:12 +0000 (0:00:00.968) 0:00:14.153 ******** 2026-03-28 02:20:15.553880 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553891 | orchestrator | 2026-03-28 02:20:15.553905 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-28 02:20:15.553918 | orchestrator | Saturday 28 March 2026 02:20:14 +0000 (0:00:01.729) 0:00:15.883 ******** 2026-03-28 02:20:15.553930 | orchestrator | changed: [testbed-manager] 2026-03-28 02:20:15.553943 | orchestrator | 2026-03-28 02:20:15.553955 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:20:15.553967 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 02:20:15.553980 | orchestrator | 2026-03-28 02:20:15.553993 | orchestrator | 2026-03-28 02:20:15.554005 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:20:15.554107 | orchestrator | Saturday 28 March 2026 02:20:15 +0000 (0:00:00.952) 0:00:16.836 ******** 2026-03-28 02:20:15.554133 | orchestrator | =============================================================================== 2026-03-28 02:20:15.554147 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.31s 2026-03-28 02:20:15.554160 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2026-03-28 02:20:15.554172 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.64s 2026-03-28 02:20:15.554183 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.34s 2026-03-28 02:20:15.554193 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2026-03-28 02:20:15.554204 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-03-28 02:20:15.554215 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.76s 2026-03-28 02:20:15.554226 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-03-28 02:20:15.554236 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.48s 2026-03-28 02:20:15.554247 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-03-28 02:20:15.554258 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-03-28 02:20:15.886099 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-28 02:20:15.924692 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-28 02:20:15.924789 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-28 02:20:16.012451 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 159 0 --:--:-- --:--:-- --:--:-- 159 100 14 100 14 0 0 158 0 --:--:-- --:--:-- --:--:-- 159 2026-03-28 02:20:16.029854 | orchestrator | + osism apply --environment custom workarounds 2026-03-28 02:20:18.065686 | orchestrator | 2026-03-28 02:20:18 | INFO  | Trying to run play workarounds in environment custom 2026-03-28 02:20:28.195896 | orchestrator | 2026-03-28 02:20:28 | INFO  | Task 9bee9e09-37bd-4696-b900-d48c38eacdc0 (workarounds) was prepared for execution. 
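The `osism apply wireguard` play above installs the packages, generates the server key pair and a preshared key, templates `wg0.conf`, and enables `wg-quick@wg0` on the manager. A hedged sketch of that task pattern follows; the task names mirror the log, but the module arguments, template name, and file paths are illustrative assumptions, not the actual `osism.services.wireguard` source:

```yaml
# Sketch only: key generation, config templating, and service management
# for a WireGuard server, modeled on the tasks seen in the log above.
- name: Create public and private key - server
  ansible.builtin.shell: >
    umask 077 &&
    wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
  args:
    creates: /etc/wireguard/privatekey  # idempotent: skip if key already exists

- name: Copy wg0.conf configuration file
  ansible.builtin.template:
    src: wg0.conf.j2          # assumed template name
    dest: /etc/wireguard/wg0.conf
    mode: "0600"
  notify: Restart wg0 service

- name: Manage wg-quick@wg0.service service
  ansible.builtin.service:
    name: wg-quick@wg0
    state: started
    enabled: true
```

The `creates:` guard and the handler-driven restart are what make the play safe to re-run, which matters in a periodic upgrade job like this one.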
2026-03-28 02:20:28.196004 | orchestrator | 2026-03-28 02:20:28 | INFO  | It takes a moment until task 9bee9e09-37bd-4696-b900-d48c38eacdc0 (workarounds) has been started and output is visible here.
2026-03-28 02:20:53.779970 | orchestrator |
2026-03-28 02:20:53.780077 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 02:20:53.780092 | orchestrator |
2026-03-28 02:20:53.780102 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-28 02:20:53.780111 | orchestrator | Saturday 28 March 2026 02:20:32 +0000 (0:00:00.129) 0:00:00.129 ********
2026-03-28 02:20:53.780120 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780129 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780138 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780147 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780155 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780164 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780172 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-28 02:20:53.780181 | orchestrator |
2026-03-28 02:20:53.780190 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-28 02:20:53.780198 | orchestrator |
2026-03-28 02:20:53.780207 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 02:20:53.780216 | orchestrator | Saturday 28 March 2026 02:20:33 +0000 (0:00:00.822) 0:00:00.951 ********
2026-03-28 02:20:53.780225 | orchestrator | ok: [testbed-manager]
2026-03-28 02:20:53.780257 | orchestrator |
2026-03-28 02:20:53.780267 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-28 02:20:53.780275 | orchestrator |
2026-03-28 02:20:53.780284 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 02:20:53.780293 | orchestrator | Saturday 28 March 2026 02:20:35 +0000 (0:00:02.558) 0:00:03.510 ********
2026-03-28 02:20:53.780302 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:20:53.780310 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:20:53.780319 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:20:53.780327 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:20:53.780335 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:20:53.780344 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:20:53.780352 | orchestrator |
2026-03-28 02:20:53.780361 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-28 02:20:53.780369 | orchestrator |
2026-03-28 02:20:53.780402 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-28 02:20:53.780411 | orchestrator | Saturday 28 March 2026 02:20:37 +0000 (0:00:01.834) 0:00:05.345 ********
2026-03-28 02:20:53.780420 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780431 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780439 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780448 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780456 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780465 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 02:20:53.780473 | orchestrator |
2026-03-28 02:20:53.780482 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-28 02:20:53.780491 | orchestrator | Saturday 28 March 2026 02:20:39 +0000 (0:00:01.485) 0:00:06.831 ********
2026-03-28 02:20:53.780500 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:20:53.780511 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:20:53.780520 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:20:53.780529 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:20:53.780539 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:20:53.780548 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:20:53.780557 | orchestrator |
2026-03-28 02:20:53.780567 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-28 02:20:53.780577 | orchestrator | Saturday 28 March 2026 02:20:42 +0000 (0:00:03.655) 0:00:10.486 ********
2026-03-28 02:20:53.780587 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:20:53.780597 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:20:53.780607 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:20:53.780616 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:20:53.780625 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:20:53.780635 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:20:53.780645 | orchestrator |
2026-03-28 02:20:53.780654 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-28 02:20:53.780664 | orchestrator |
2026-03-28 02:20:53.780673 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-28 02:20:53.780683 | orchestrator | Saturday 28 March 2026 02:20:43 +0000 (0:00:00.749) 0:00:11.235 ********
2026-03-28 02:20:53.780693 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:20:53.780702 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:20:53.780712 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:20:53.780722 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:20:53.780732 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:20:53.780742 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:20:53.780758 | orchestrator | changed: [testbed-manager]
2026-03-28 02:20:53.780768 | orchestrator |
2026-03-28 02:20:53.780778 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-28 02:20:53.780788 | orchestrator | Saturday 28 March 2026 02:20:45 +0000 (0:00:01.589) 0:00:12.825 ********
2026-03-28 02:20:53.780797 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:20:53.780807 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:20:53.780817 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:20:53.780827 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:20:53.780889 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:20:53.780900 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:20:53.780924 | orchestrator | changed: [testbed-manager]
2026-03-28 02:20:53.780933 | orchestrator |
2026-03-28 02:20:53.780942 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-28 02:20:53.780953 | orchestrator | Saturday 28 March 2026 02:20:46 +0000 (0:00:01.665) 0:00:14.491 ********
2026-03-28 02:20:53.780968 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:20:53.780982 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:20:53.780995 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:20:53.781009 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:20:53.781023 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:20:53.781037 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:20:53.781050 | orchestrator | ok: [testbed-manager]
2026-03-28 02:20:53.781061 | orchestrator |
2026-03-28 02:20:53.781074 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-28 02:20:53.781088 | orchestrator | Saturday 28 March 2026 02:20:48 +0000 (0:00:01.597) 0:00:16.088 ********
2026-03-28 02:20:53.781101 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:20:53.781115 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:20:53.781130 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:20:53.781144 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:20:53.781158 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:20:53.781173 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:20:53.781186 | orchestrator | changed: [testbed-manager]
2026-03-28 02:20:53.781200 | orchestrator |
2026-03-28 02:20:53.781215 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-28 02:20:53.781230 | orchestrator | Saturday 28 March 2026 02:20:50 +0000 (0:00:01.808) 0:00:17.896 ********
2026-03-28 02:20:53.781245 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:20:53.781259 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:20:53.781273 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:20:53.781289 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:20:53.781299 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:20:53.781307 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:20:53.781316 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:20:53.781324 | orchestrator |
2026-03-28 02:20:53.781333 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-28 02:20:53.781341 | orchestrator |
2026-03-28 02:20:53.781350 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-28 02:20:53.781359 | orchestrator | Saturday 28 March 2026 02:20:50 +0000 (0:00:00.678) 0:00:18.575 ********
2026-03-28 02:20:53.781367 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:20:53.781376 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:20:53.781385 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:20:53.781394 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:20:53.781418 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:20:53.781433 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:20:53.781445 | orchestrator | ok: [testbed-manager]
2026-03-28 02:20:53.781454 | orchestrator |
2026-03-28 02:20:53.781462 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:20:53.781472 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:20:53.781482 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781499 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781508 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781516 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781525 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781533 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:20:53.781542 | orchestrator |
2026-03-28 02:20:53.781550 | orchestrator |
2026-03-28 02:20:53.781559 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:20:53.781568 | orchestrator | Saturday 28 March 2026 02:20:53 +0000 (0:00:02.839) 0:00:21.414 ********
2026-03-28 02:20:53.781576 | orchestrator | ===============================================================================
2026-03-28 02:20:53.781585 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.66s
2026-03-28 02:20:53.781594 | orchestrator | Install python3-docker -------------------------------------------------- 2.84s
2026-03-28 02:20:53.781602 | orchestrator | Apply netplan configuration --------------------------------------------- 2.56s
2026-03-28 02:20:53.781611 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-03-28 02:20:53.781620 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s
2026-03-28 02:20:53.781628 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s
2026-03-28 02:20:53.781637 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2026-03-28 02:20:53.781645 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s
2026-03-28 02:20:53.781654 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2026-03-28 02:20:53.781662 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s
2026-03-28 02:20:53.781671 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2026-03-28 02:20:53.781689 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s
2026-03-28 02:20:54.476180 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-28 02:21:06.616361 | orchestrator | 2026-03-28 02:21:06 | INFO  | Task 2bd5bbbc-38fb-4889-9599-2adabd092b2f (reboot) was prepared for execution.
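The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` run that follows gates on a confirmation variable and then triggers the reboot without waiting for the hosts to come back. A hedged sketch of that guard-then-reboot pattern; the task names and the `ireallymeanit` variable come from the log, while the module arguments are assumptions:

```yaml
# Sketch only: abort unless explicitly confirmed, then fire the reboot
# asynchronously so the play does not block on the dropped connection.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "To really reboot, pass -e ireallymeanit=yes"
  when: ireallymeanit | default('no') != 'yes'

- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && shutdown -r now
  async: 1   # fire-and-forget: detach before the SSH connection drops
  poll: 0
```

Because the play does not wait, a separate `wait-for-connection` step is needed afterwards to confirm the nodes are back, which is exactly what this job runs next.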
2026-03-28 02:21:06.616499 | orchestrator | 2026-03-28 02:21:06 | INFO  | It takes a moment until task 2bd5bbbc-38fb-4889-9599-2adabd092b2f (reboot) has been started and output is visible here. 2026-03-28 02:21:16.974842 | orchestrator | 2026-03-28 02:21:16.975041 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 02:21:16.975057 | orchestrator | 2026-03-28 02:21:16.975066 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 02:21:16.975075 | orchestrator | Saturday 28 March 2026 02:21:10 +0000 (0:00:00.206) 0:00:00.206 ******** 2026-03-28 02:21:16.975085 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:21:16.975095 | orchestrator | 2026-03-28 02:21:16.975104 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 02:21:16.975112 | orchestrator | Saturday 28 March 2026 02:21:10 +0000 (0:00:00.109) 0:00:00.316 ******** 2026-03-28 02:21:16.975121 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:21:16.975130 | orchestrator | 2026-03-28 02:21:16.975139 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 02:21:16.975170 | orchestrator | Saturday 28 March 2026 02:21:11 +0000 (0:00:00.923) 0:00:01.240 ******** 2026-03-28 02:21:16.975179 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:21:16.975188 | orchestrator | 2026-03-28 02:21:16.975196 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 02:21:16.975205 | orchestrator | 2026-03-28 02:21:16.975213 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 02:21:16.975222 | orchestrator | Saturday 28 March 2026 02:21:11 +0000 (0:00:00.128) 0:00:01.368 ******** 2026-03-28 02:21:16.975231 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:21:16.975239 | 
orchestrator | 2026-03-28 02:21:16.975248 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 02:21:16.975256 | orchestrator | Saturday 28 March 2026 02:21:12 +0000 (0:00:00.104) 0:00:01.473 ******** 2026-03-28 02:21:16.975265 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:21:16.975273 | orchestrator | 2026-03-28 02:21:16.975295 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 02:21:16.975304 | orchestrator | Saturday 28 March 2026 02:21:12 +0000 (0:00:00.667) 0:00:02.141 ******** 2026-03-28 02:21:16.975312 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:21:16.975321 | orchestrator | 2026-03-28 02:21:16.975329 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 02:21:16.975338 | orchestrator | 2026-03-28 02:21:16.975346 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 02:21:16.975355 | orchestrator | Saturday 28 March 2026 02:21:12 +0000 (0:00:00.119) 0:00:02.261 ******** 2026-03-28 02:21:16.975364 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:21:16.975372 | orchestrator | 2026-03-28 02:21:16.975381 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 02:21:16.975389 | orchestrator | Saturday 28 March 2026 02:21:13 +0000 (0:00:00.230) 0:00:02.492 ******** 2026-03-28 02:21:16.975398 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:21:16.975408 | orchestrator | 2026-03-28 02:21:16.975419 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 02:21:16.975429 | orchestrator | Saturday 28 March 2026 02:21:13 +0000 (0:00:00.718) 0:00:03.210 ******** 2026-03-28 02:21:16.975439 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:21:16.975449 | orchestrator | 2026-03-28 02:21:16.975458 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 02:21:16.975468 | orchestrator | 2026-03-28 02:21:16.975478 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 02:21:16.975487 | orchestrator | Saturday 28 March 2026 02:21:13 +0000 (0:00:00.129) 0:00:03.340 ******** 2026-03-28 02:21:16.975497 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:21:16.975507 | orchestrator | 2026-03-28 02:21:16.975516 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 02:21:16.975526 | orchestrator | Saturday 28 March 2026 02:21:14 +0000 (0:00:00.128) 0:00:03.469 ******** 2026-03-28 02:21:16.975536 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:21:16.975545 | orchestrator | 2026-03-28 02:21:16.975555 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 02:21:16.975565 | orchestrator | Saturday 28 March 2026 02:21:14 +0000 (0:00:00.695) 0:00:04.164 ******** 2026-03-28 02:21:16.975575 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:21:16.975585 | orchestrator | 2026-03-28 02:21:16.975595 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 02:21:16.975604 | orchestrator | 2026-03-28 02:21:16.975613 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 02:21:16.975623 | orchestrator | Saturday 28 March 2026 02:21:14 +0000 (0:00:00.137) 0:00:04.302 ******** 2026-03-28 02:21:16.975633 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:21:16.975643 | orchestrator | 2026-03-28 02:21:16.975652 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 02:21:16.975668 | orchestrator | Saturday 28 March 2026 02:21:15 +0000 (0:00:00.111) 0:00:04.413 ******** 2026-03-28 
02:21:16.975678 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:21:16.975688 | orchestrator |
2026-03-28 02:21:16.975698 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 02:21:16.975708 | orchestrator | Saturday 28 March 2026 02:21:15 +0000 (0:00:00.649) 0:00:05.062 ********
2026-03-28 02:21:16.975717 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:21:16.975726 | orchestrator |
2026-03-28 02:21:16.975735 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-28 02:21:16.975743 | orchestrator |
2026-03-28 02:21:16.975752 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-28 02:21:16.975760 | orchestrator | Saturday 28 March 2026 02:21:15 +0000 (0:00:00.110) 0:00:05.173 ********
2026-03-28 02:21:16.975769 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:21:16.975777 | orchestrator |
2026-03-28 02:21:16.975786 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-28 02:21:16.975794 | orchestrator | Saturday 28 March 2026 02:21:15 +0000 (0:00:00.100) 0:00:05.274 ********
2026-03-28 02:21:16.975803 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:21:16.975811 | orchestrator |
2026-03-28 02:21:16.975820 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-28 02:21:16.975828 | orchestrator | Saturday 28 March 2026 02:21:16 +0000 (0:00:00.689) 0:00:05.963 ********
2026-03-28 02:21:16.975852 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:21:16.975882 | orchestrator |
2026-03-28 02:21:16.975896 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:21:16.975913 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975929 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975944 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975958 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975967 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975976 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:21:16.975984 | orchestrator |
2026-03-28 02:21:16.975993 | orchestrator |
2026-03-28 02:21:16.976001 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:21:16.976015 | orchestrator | Saturday 28 March 2026 02:21:16 +0000 (0:00:00.041) 0:00:06.005 ********
2026-03-28 02:21:16.976024 | orchestrator | ===============================================================================
2026-03-28 02:21:16.976032 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s
2026-03-28 02:21:16.976041 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-03-28 02:21:16.976049 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2026-03-28 02:21:17.305744 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-28 02:21:29.532566 | orchestrator | 2026-03-28 02:21:29 | INFO  | Task f6a200a2-fd04-4a48-955b-ed31878e9b24 (wait-for-connection) was prepared for execution.
2026-03-28 02:21:29.532711 | orchestrator | 2026-03-28 02:21:29 | INFO  | It takes a moment until task f6a200a2-fd04-4a48-955b-ed31878e9b24 (wait-for-connection) has been started and output is visible here.
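The sequence above is a fire-and-forget reboot: one play triggers the restart without waiting, then a separate `osism apply wait-for-connection` run blocks until the nodes answer again. A minimal sketch of the same wait pattern in plain shell, assuming an SSH probe and a 300-second default timeout (both illustrative, not taken from the job):

```shell
#!/usr/bin/env bash
# Sketch of a "wait until the host is reachable again" loop, as used after
# the reboot play above. The ssh probe and timeout are assumptions; the job
# itself delegates this to the wait-for-connection playbook.
wait_until_reachable() {
    local host=$1
    local timeout=${2:-300}
    local waited=0
    # Probe every 5 seconds until SSH answers or the timeout is exhausted.
    until ssh -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "${host} not reachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}
```

Keeping the trigger and the wait in separate plays lets all nodes reboot in parallel before any of them is polled.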
2026-03-28 02:21:45.695864 | orchestrator |
2026-03-28 02:21:45.696066 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-28 02:21:45.696089 | orchestrator |
2026-03-28 02:21:45.696102 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-28 02:21:45.696114 | orchestrator | Saturday 28 March 2026 02:21:33 +0000 (0:00:00.229) 0:00:00.229 ********
2026-03-28 02:21:45.696126 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:21:45.696138 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:21:45.696149 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:21:45.696159 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:21:45.696170 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:21:45.696181 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:21:45.696192 | orchestrator |
2026-03-28 02:21:45.696203 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:21:45.696215 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696227 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696239 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696250 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696261 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696272 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:21:45.696283 | orchestrator |
2026-03-28 02:21:45.696294 | orchestrator |
2026-03-28 02:21:45.696305 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:21:45.696317 | orchestrator | Saturday 28 March 2026 02:21:45 +0000 (0:00:11.532) 0:00:11.762 ********
2026-03-28 02:21:45.696327 | orchestrator | ===============================================================================
2026-03-28 02:21:45.696339 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.53s
2026-03-28 02:21:45.996065 | orchestrator | + osism apply hddtemp
2026-03-28 02:21:58.140712 | orchestrator | 2026-03-28 02:21:58 | INFO  | Task c9f7e162-b2b4-4cdd-b7b9-5d0ceab6e408 (hddtemp) was prepared for execution.
2026-03-28 02:21:58.140822 | orchestrator | 2026-03-28 02:21:58 | INFO  | It takes a moment until task c9f7e162-b2b4-4cdd-b7b9-5d0ceab6e408 (hddtemp) has been started and output is visible here.
2026-03-28 02:22:27.599357 | orchestrator |
2026-03-28 02:22:27.599514 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-28 02:22:27.599538 | orchestrator |
2026-03-28 02:22:27.599555 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-28 02:22:27.599571 | orchestrator | Saturday 28 March 2026 02:22:02 +0000 (0:00:00.269) 0:00:00.269 ********
2026-03-28 02:22:27.599587 | orchestrator | ok: [testbed-manager]
2026-03-28 02:22:27.599603 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:22:27.599619 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:22:27.599634 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:22:27.599649 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:22:27.599665 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:22:27.599681 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:22:27.599695 | orchestrator |
2026-03-28 02:22:27.599710 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-28 02:22:27.599726 | orchestrator | Saturday 28 March 2026 02:22:03 +0000 (0:00:00.746) 0:00:01.016 ********
2026-03-28 02:22:27.599743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:22:27.599789 | orchestrator |
2026-03-28 02:22:27.599806 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-28 02:22:27.599821 | orchestrator | Saturday 28 March 2026 02:22:04 +0000 (0:00:01.205) 0:00:02.222 ********
2026-03-28 02:22:27.599836 | orchestrator | ok: [testbed-manager]
2026-03-28 02:22:27.599851 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:22:27.599865 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:22:27.599880 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:22:27.599895 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:22:27.599911 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:22:27.599927 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:22:27.599966 | orchestrator |
2026-03-28 02:22:27.599996 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-28 02:22:27.600013 | orchestrator | Saturday 28 March 2026 02:22:06 +0000 (0:00:02.017) 0:00:04.239 ********
2026-03-28 02:22:27.600028 | orchestrator | changed: [testbed-manager]
2026-03-28 02:22:27.600045 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:22:27.600060 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:22:27.600075 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:22:27.600090 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:22:27.600105 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:22:27.600119 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:22:27.600135 | orchestrator |
2026-03-28 02:22:27.600150 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-28 02:22:27.600165 | orchestrator | Saturday 28 March 2026 02:22:07 +0000 (0:00:01.209) 0:00:05.449 ********
2026-03-28 02:22:27.600180 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:22:27.600195 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:22:27.600208 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:22:27.600223 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:22:27.600238 | orchestrator | ok: [testbed-manager]
2026-03-28 02:22:27.600253 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:22:27.600267 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:22:27.600281 | orchestrator |
2026-03-28 02:22:27.600296 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-28 02:22:27.600311 | orchestrator | Saturday 28 March 2026 02:22:09 +0000 (0:00:02.150) 0:00:07.599 ********
2026-03-28 02:22:27.600326 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:22:27.600340 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:22:27.600355 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:22:27.600369 | orchestrator | changed: [testbed-manager]
2026-03-28 02:22:27.600384 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:22:27.600399 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:22:27.600414 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:22:27.600429 | orchestrator |
2026-03-28 02:22:27.600444 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-28 02:22:27.600458 | orchestrator | Saturday 28 March 2026 02:22:10 +0000 (0:00:00.845) 0:00:08.445 ********
2026-03-28 02:22:27.600473 | orchestrator | changed: [testbed-manager]
2026-03-28 02:22:27.600487 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:22:27.600502 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:22:27.600517 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:22:27.600531 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:22:27.600545 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:22:27.600559 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:22:27.600574 | orchestrator |
2026-03-28 02:22:27.600589 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-28 02:22:27.600604 | orchestrator | Saturday 28 March 2026 02:22:23 +0000 (0:00:13.374) 0:00:21.819 ********
2026-03-28 02:22:27.600619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:22:27.600647 | orchestrator |
2026-03-28 02:22:27.600662 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-28 02:22:27.600676 | orchestrator | Saturday 28 March 2026 02:22:25 +0000 (0:00:01.233) 0:00:23.053 ********
2026-03-28 02:22:27.600692 | orchestrator | changed: [testbed-manager]
2026-03-28 02:22:27.600707 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:22:27.600721 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:22:27.600736 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:22:27.600750 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:22:27.600765 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:22:27.600779 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:22:27.600794 | orchestrator |
2026-03-28 02:22:27.600809 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:22:27.600825 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:22:27.600862 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.600879 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.600892 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.600906 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.600922 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.600984 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 02:22:27.601001 | orchestrator |
2026-03-28 02:22:27.601016 | orchestrator |
2026-03-28 02:22:27.601030 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:22:27.601044 | orchestrator | Saturday 28 March 2026 02:22:27 +0000 (0:00:01.920) 0:00:24.974 ********
2026-03-28 02:22:27.601059 | orchestrator | ===============================================================================
2026-03-28 02:22:27.601074 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.37s
2026-03-28 02:22:27.601090 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.15s
2026-03-28 02:22:27.601112 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s
2026-03-28 02:22:27.601126 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s
2026-03-28 02:22:27.601141 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.23s
2026-03-28 02:22:27.601156 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.21s
2026-03-28 02:22:27.601171 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2026-03-28 02:22:27.601186 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.85s
2026-03-28 02:22:27.601200 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s
2026-03-28 02:22:27.950422 | orchestrator | ++ semver 9.5.0 7.1.1
2026-03-28 02:22:28.003885 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-28 02:22:28.004026 | orchestrator | + sudo systemctl restart manager.service
2026-03-28 02:22:42.229519 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 02:22:42.229758 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-28 02:22:42.229777 | orchestrator | + local max_attempts=60
2026-03-28 02:22:42.229790 | orchestrator | + local name=ceph-ansible
2026-03-28 02:22:42.229801 | orchestrator | + local attempt_num=1
2026-03-28 02:22:42.229842 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:22:42.274525 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:22:42.274615 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:22:42.274629 | orchestrator | + sleep 5
2026-03-28 02:22:47.281281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:22:47.337839 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:22:47.338161 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:22:47.338190 | orchestrator | + sleep 5
2026-03-28 02:22:52.341695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:22:52.375164 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:22:52.375250 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:22:52.375263 | orchestrator | + sleep 5
2026-03-28 02:22:57.380583 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:22:57.417651 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:22:57.417754 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:22:57.417771 | orchestrator | + sleep 5
2026-03-28 02:23:02.422128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:02.456183 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:02.456264 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:02.456275 | orchestrator | + sleep 5
2026-03-28 02:23:07.461343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:07.491068 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:07.491139 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:07.491145 | orchestrator | + sleep 5
2026-03-28 02:23:12.496080 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:12.534269 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:12.534344 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:12.534352 | orchestrator | + sleep 5
2026-03-28 02:23:17.539676 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:17.586229 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:17.586332 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:17.586348 | orchestrator | + sleep 5
2026-03-28 02:23:22.589314 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:22.628491 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:22.628576 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:22.628584 | orchestrator | + sleep 5
2026-03-28 02:23:27.632856 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:27.682386 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:27.682523 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:27.682544 | orchestrator | + sleep 5
2026-03-28 02:23:32.686231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:32.727963 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:32.728072 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:32.728083 | orchestrator | + sleep 5
2026-03-28 02:23:37.733159 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:37.774832 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:37.774948 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:37.774970 | orchestrator | + sleep 5
2026-03-28 02:23:42.779201 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:42.817582 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:42.817681 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-28 02:23:42.817695 | orchestrator | + sleep 5
2026-03-28 02:23:47.823142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-28 02:23:47.855496 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:47.855616 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-28 02:23:47.855638 | orchestrator | + local max_attempts=60
2026-03-28 02:23:47.855654 | orchestrator | + local name=kolla-ansible
2026-03-28 02:23:47.855670 | orchestrator | + local attempt_num=1
2026-03-28 02:23:47.855681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-28 02:23:47.885227 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:47.885311 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-28 02:23:47.885347 | orchestrator | + local max_attempts=60
2026-03-28 02:23:47.885357 | orchestrator | + local name=osism-ansible
2026-03-28 02:23:47.885365 | orchestrator | + local attempt_num=1
2026-03-28 02:23:47.885374 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-28 02:23:47.920036 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-28 02:23:47.920114 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-28 02:23:47.920125 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-28 02:23:48.053213 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-28 02:23:48.176048 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-28 02:23:48.332782 | orchestrator | ARA in osism-ansible already disabled.
2026-03-28 02:23:48.498857 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-28 02:23:48.498956 | orchestrator | + osism apply gather-facts
2026-03-28 02:24:00.712404 | orchestrator | 2026-03-28 02:24:00 | INFO  | Task 1db56a4a-a7eb-464e-906c-1c77b5b75c35 (gather-facts) was prepared for execution.
2026-03-28 02:24:00.712502 | orchestrator | 2026-03-28 02:24:00 | INFO  | It takes a moment until task 1db56a4a-a7eb-464e-906c-1c77b5b75c35 (gather-facts) has been started and output is visible here.
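The `set -x` trace above exposes the polling logic of `wait_for_container_healthy`: inspect the container's Docker health status every five seconds (`unhealthy` → `starting` → `healthy` for ceph-ansible) and give up after a fixed attempt budget. A reconstruction of the helper from that trace, using `docker` rather than the absolute `/usr/bin/docker` path for brevity; the actual script in the testbed repository may differ in details:

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy as seen in the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the health status until the container reports "healthy" or the
    # attempt budget is exhausted.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With 60 attempts at 5-second intervals this allows roughly five minutes for a container's health check to settle after the `manager.service` restart.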
2026-03-28 02:24:14.532614 | orchestrator | 2026-03-28 02:24:14.532732 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 02:24:14.532749 | orchestrator | 2026-03-28 02:24:14.532762 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-28 02:24:14.532774 | orchestrator | Saturday 28 March 2026 02:24:05 +0000 (0:00:00.221) 0:00:00.221 ******** 2026-03-28 02:24:14.532785 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:24:14.532798 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:24:14.532810 | orchestrator | ok: [testbed-manager] 2026-03-28 02:24:14.532821 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:24:14.532831 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:24:14.532842 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:24:14.532853 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:24:14.532864 | orchestrator | 2026-03-28 02:24:14.532874 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 02:24:14.532885 | orchestrator | 2026-03-28 02:24:14.532896 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 02:24:14.532907 | orchestrator | Saturday 28 March 2026 02:24:13 +0000 (0:00:08.503) 0:00:08.725 ******** 2026-03-28 02:24:14.532918 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:24:14.532930 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:24:14.532940 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:24:14.532951 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:24:14.532962 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:24:14.532973 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:24:14.532983 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:24:14.532994 | orchestrator | 2026-03-28 02:24:14.533005 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 02:24:14.533065 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533077 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533088 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533100 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533111 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533122 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533159 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 02:24:14.533173 | orchestrator | 2026-03-28 02:24:14.533185 | orchestrator | 2026-03-28 02:24:14.533197 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:24:14.533211 | orchestrator | Saturday 28 March 2026 02:24:14 +0000 (0:00:00.558) 0:00:09.284 ******** 2026-03-28 02:24:14.533223 | orchestrator | =============================================================================== 2026-03-28 02:24:14.533235 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.50s 2026-03-28 02:24:14.533248 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-03-28 02:24:14.878412 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-28 02:24:14.892914 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-28 
02:24:14.914974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-28 02:24:14.935474 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-28 02:24:14.953568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-28 02:24:14.978633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-28 02:24:14.999440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-28 02:24:15.019966 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-28 02:24:15.039729 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-28 02:24:15.057518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-28 02:24:15.075192 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-28 02:24:15.091963 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-28 02:24:15.103900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-28 02:24:15.117099 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-28 02:24:15.132133 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-28 02:24:15.144053 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-28 02:24:15.163870 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-28 02:24:15.183951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-28 02:24:15.214342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-28 02:24:15.232542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-28 02:24:15.249332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-28 02:24:15.274121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-28 02:24:15.287391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-28 02:24:15.309178 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-28 02:24:15.729381 | orchestrator | ok: Runtime: 0:24:51.481006 2026-03-28 02:24:15.834456 | 2026-03-28 02:24:15.834603 | TASK [Deploy services] 2026-03-28 02:24:16.557187 | orchestrator | 2026-03-28 02:24:16.557374 | orchestrator | # DEPLOY SERVICES 2026-03-28 02:24:16.557401 | orchestrator | 2026-03-28 02:24:16.557415 | orchestrator | + set -e 2026-03-28 02:24:16.557427 | orchestrator | + echo 2026-03-28 02:24:16.557439 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-28 02:24:16.557451 | orchestrator | + echo 2026-03-28 02:24:16.557492 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 02:24:16.557512 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 02:24:16.557526 | orchestrator | ++ INTERACTIVE=false 2026-03-28 
02:24:16.557537 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 02:24:16.557557 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 02:24:16.557568 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 02:24:16.557581 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 02:24:16.557591 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 02:24:16.557607 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 02:24:16.557617 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 02:24:16.557630 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 02:24:16.557641 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 02:24:16.557654 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 02:24:16.557664 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 02:24:16.557674 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 02:24:16.557685 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 02:24:16.557695 | orchestrator | ++ export ARA=false 2026-03-28 02:24:16.557705 | orchestrator | ++ ARA=false 2026-03-28 02:24:16.557716 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 02:24:16.557725 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 02:24:16.557735 | orchestrator | ++ export TEMPEST=false 2026-03-28 02:24:16.557745 | orchestrator | ++ TEMPEST=false 2026-03-28 02:24:16.557754 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 02:24:16.557764 | orchestrator | ++ IS_ZUUL=true 2026-03-28 02:24:16.557774 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 02:24:16.557800 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 02:24:16.557818 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 02:24:16.557834 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 02:24:16.557850 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 02:24:16.557866 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 02:24:16.557882 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 
02:24:16.557895 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 02:24:16.557909 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 02:24:16.557933 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 02:24:16.557951 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-28 02:24:16.567250 | orchestrator | + set -e 2026-03-28 02:24:16.568677 | orchestrator | 2026-03-28 02:24:16.568740 | orchestrator | # PULL IMAGES 2026-03-28 02:24:16.568754 | orchestrator | 2026-03-28 02:24:16.568766 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 02:24:16.568781 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 02:24:16.568794 | orchestrator | ++ INTERACTIVE=false 2026-03-28 02:24:16.568805 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 02:24:16.568816 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 02:24:16.568842 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 02:24:16.568863 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 02:24:16.568876 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 02:24:16.568887 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 02:24:16.568982 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 02:24:16.568995 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 02:24:16.569006 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 02:24:16.569035 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 02:24:16.569047 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 02:24:16.569059 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 02:24:16.569071 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 02:24:16.569082 | orchestrator | ++ export ARA=false 2026-03-28 02:24:16.569093 | orchestrator | ++ ARA=false 2026-03-28 02:24:16.569108 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 02:24:16.569120 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 02:24:16.569131 | orchestrator | ++ export TEMPEST=false 
2026-03-28 02:24:16.569143 | orchestrator | ++ TEMPEST=false
2026-03-28 02:24:16.569155 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 02:24:16.569166 | orchestrator | ++ IS_ZUUL=true
2026-03-28 02:24:16.569178 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 02:24:16.569189 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 02:24:16.569200 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 02:24:16.569212 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 02:24:16.569223 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 02:24:16.569235 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 02:24:16.569275 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 02:24:16.569286 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 02:24:16.569298 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 02:24:16.569308 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 02:24:16.569319 | orchestrator | + echo
2026-03-28 02:24:16.569331 | orchestrator | + echo '# PULL IMAGES'
2026-03-28 02:24:16.569342 | orchestrator | + echo
2026-03-28 02:24:16.569364 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-28 02:24:16.629917 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-28 02:24:16.630094 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-28 02:24:18.623641 | orchestrator | 2026-03-28 02:24:18 | INFO  | Trying to run play pull-images in environment custom
2026-03-28 02:24:28.778499 | orchestrator | 2026-03-28 02:24:28 | INFO  | Task ec36c443-86d0-44e4-9254-506bf5931fe3 (pull-images) was prepared for execution.
2026-03-28 02:24:28.778623 | orchestrator | 2026-03-28 02:24:28 | INFO  | Task ec36c443-86d0-44e4-9254-506bf5931fe3 is running in background. No more output. Check ARA for logs.
2026-03-28 02:24:29.112122 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-03-28 02:24:41.178562 | orchestrator | 2026-03-28 02:24:41 | INFO  | Task a3de3512-2347-451b-900e-a321d0478872 (cgit) was prepared for execution.
2026-03-28 02:24:41.178668 | orchestrator | 2026-03-28 02:24:41 | INFO  | Task a3de3512-2347-451b-900e-a321d0478872 is running in background. No more output. Check ARA for logs.
2026-03-28 02:24:53.909814 | orchestrator | 2026-03-28 02:24:53 | INFO  | Task a14e70a1-d0f0-4dfe-bc5e-4a8c491318f0 (dotfiles) was prepared for execution.
2026-03-28 02:24:53.909925 | orchestrator | 2026-03-28 02:24:53 | INFO  | Task a14e70a1-d0f0-4dfe-bc5e-4a8c491318f0 is running in background. No more output. Check ARA for logs.
2026-03-28 02:25:06.774886 | orchestrator | 2026-03-28 02:25:06 | INFO  | Task 5bea5d5e-1b5a-40dd-9603-dd934715b8b1 (homer) was prepared for execution.
2026-03-28 02:25:06.774999 | orchestrator | 2026-03-28 02:25:06 | INFO  | Task 5bea5d5e-1b5a-40dd-9603-dd934715b8b1 is running in background. No more output. Check ARA for logs.
2026-03-28 02:25:19.321944 | orchestrator | 2026-03-28 02:25:19 | INFO  | Task f8ae6bad-85d6-4019-b837-e3f1cf3a4cb3 (phpmyadmin) was prepared for execution.
2026-03-28 02:25:19.322201 | orchestrator | 2026-03-28 02:25:19 | INFO  | Task f8ae6bad-85d6-4019-b837-e3f1cf3a4cb3 is running in background. No more output. Check ARA for logs.
2026-03-28 02:25:31.747724 | orchestrator | 2026-03-28 02:25:31 | INFO  | Task 768c44e3-d6e5-429b-beea-0d2f9f013d1d (sosreport) was prepared for execution.
2026-03-28 02:25:31.747822 | orchestrator | 2026-03-28 02:25:31 | INFO  | Task 768c44e3-d6e5-429b-beea-0d2f9f013d1d is running in background. No more output. Check ARA for logs.
2026-03-28 02:25:32.059948 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-03-28 02:25:32.069085 | orchestrator | + set -e
2026-03-28 02:25:32.069312 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 02:25:32.069333 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 02:25:32.069347 | orchestrator | ++ INTERACTIVE=false
2026-03-28 02:25:32.069361 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 02:25:32.069373 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 02:25:32.069385 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 02:25:32.069558 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 02:25:32.069573 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 02:25:32.069584 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 02:25:32.069595 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 02:25:32.069607 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 02:25:32.069618 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 02:25:32.069630 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-28 02:25:32.069641 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-28 02:25:32.069652 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-28 02:25:32.069663 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-28 02:25:32.069674 | orchestrator | ++ export ARA=false
2026-03-28 02:25:32.069686 | orchestrator | ++ ARA=false
2026-03-28 02:25:32.069697 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 02:25:32.069734 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 02:25:32.069746 | orchestrator | ++ export TEMPEST=false
2026-03-28 02:25:32.069757 | orchestrator | ++ TEMPEST=false
2026-03-28 02:25:32.069768 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 02:25:32.069779 | orchestrator | ++ IS_ZUUL=true
2026-03-28 02:25:32.069805 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 02:25:32.069824 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 02:25:32.069835 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 02:25:32.069846 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 02:25:32.069857 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 02:25:32.069867 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 02:25:32.069919 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 02:25:32.069931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 02:25:32.069942 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 02:25:32.069953 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 02:25:32.070074 | orchestrator | ++ semver 9.5.0 8.0.3
2026-03-28 02:25:32.135853 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-28 02:25:32.135926 | orchestrator | + osism apply frr
2026-03-28 02:25:44.398979 | orchestrator | 2026-03-28 02:25:44 | INFO  | Task 5fd51b1c-066a-4e0c-96db-bac552bf296e (frr) was prepared for execution.
2026-03-28 02:25:44.400865 | orchestrator | 2026-03-28 02:25:44 | INFO  | It takes a moment until task 5fd51b1c-066a-4e0c-96db-bac552bf296e (frr) has been started and output is visible here.
2026-03-28 02:26:16.091845 | orchestrator |
2026-03-28 02:26:16.091972 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-28 02:26:16.091990 | orchestrator |
2026-03-28 02:26:16.092014 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-28 02:26:16.092034 | orchestrator | Saturday 28 March 2026 02:25:51 +0000 (0:00:00.467) 0:00:00.467 ********
2026-03-28 02:26:16.092046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 02:26:16.092059 | orchestrator |
2026-03-28 02:26:16.092070 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-28 02:26:16.092082 | orchestrator | Saturday 28 March 2026 02:25:51 +0000 (0:00:00.245) 0:00:00.713 ********
2026-03-28 02:26:16.092093 | orchestrator | changed: [testbed-manager]
2026-03-28 02:26:16.092105 | orchestrator |
2026-03-28 02:26:16.092116 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-28 02:26:16.092129 | orchestrator | Saturday 28 March 2026 02:25:54 +0000 (0:00:02.778) 0:00:03.491 ********
2026-03-28 02:26:16.092195 | orchestrator | changed: [testbed-manager]
2026-03-28 02:26:16.092215 | orchestrator |
2026-03-28 02:26:16.092226 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-28 02:26:16.092238 | orchestrator | Saturday 28 March 2026 02:26:05 +0000 (0:00:11.424) 0:00:14.916 ********
2026-03-28 02:26:16.092253 | orchestrator | ok: [testbed-manager]
2026-03-28 02:26:16.092273 | orchestrator |
2026-03-28 02:26:16.092291 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-28 02:26:16.092308 | orchestrator | Saturday 28 March 2026 02:26:07 +0000 (0:00:01.834) 0:00:16.751 ********
2026-03-28 02:26:16.092325 | orchestrator | changed: [testbed-manager]
2026-03-28 02:26:16.092343 | orchestrator |
2026-03-28 02:26:16.092378 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-28 02:26:16.092398 | orchestrator | Saturday 28 March 2026 02:26:08 +0000 (0:00:00.868) 0:00:17.619 ********
2026-03-28 02:26:16.092417 | orchestrator | ok: [testbed-manager]
2026-03-28 02:26:16.092435 | orchestrator |
2026-03-28 02:26:16.092455 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-28 02:26:16.092477 | orchestrator | Saturday 28 March 2026 02:26:09 +0000 (0:00:01.635) 0:00:19.255 ********
2026-03-28 02:26:16.092497 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:26:16.092518 | orchestrator |
2026-03-28 02:26:16.092538 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-28 02:26:16.092559 | orchestrator | Saturday 28 March 2026 02:26:10 +0000 (0:00:00.115) 0:00:19.370 ********
2026-03-28 02:26:16.092596 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:26:16.092609 | orchestrator |
2026-03-28 02:26:16.092620 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-28 02:26:16.092631 | orchestrator | Saturday 28 March 2026 02:26:10 +0000 (0:00:00.126) 0:00:19.497 ********
2026-03-28 02:26:16.092642 | orchestrator | changed: [testbed-manager]
2026-03-28 02:26:16.092653 | orchestrator |
2026-03-28 02:26:16.092665 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-28 02:26:16.092676 | orchestrator | Saturday 28 March 2026 02:26:10 +0000 (0:00:00.767) 0:00:20.265 ********
2026-03-28 02:26:16.092687 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-28 02:26:16.092698 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-28 02:26:16.092717 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-28 02:26:16.092736 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-28 02:26:16.092754 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-28 02:26:16.092773 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-28 02:26:16.092790 | orchestrator |
2026-03-28 02:26:16.092807 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-28 02:26:16.092826 | orchestrator | Saturday 28 March 2026 02:26:12 +0000 (0:00:02.057) 0:00:22.322 ********
2026-03-28 02:26:16.092844 | orchestrator | ok: [testbed-manager]
2026-03-28 02:26:16.092864 | orchestrator |
2026-03-28 02:26:16.092883 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-28 02:26:16.092902 | orchestrator | Saturday 28 March 2026 02:26:14 +0000 (0:00:01.499) 0:00:23.822 ********
2026-03-28 02:26:16.092915 | orchestrator | changed: [testbed-manager]
2026-03-28 02:26:16.092926 | orchestrator |
2026-03-28 02:26:16.092936 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:26:16.092948 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:26:16.092959 | orchestrator |
2026-03-28 02:26:16.092970 | orchestrator |
2026-03-28 02:26:16.092990 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:26:16.093002 | orchestrator | Saturday 28 March 2026 02:26:15 +0000 (0:00:01.298) 0:00:25.120 ********
2026-03-28 02:26:16.093013 | orchestrator | ===============================================================================
2026-03-28 02:26:16.093024 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.42s
2026-03-28 02:26:16.093035 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.78s
2026-03-28 02:26:16.093046 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.06s
2026-03-28 02:26:16.093056 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.83s
2026-03-28 02:26:16.093067 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.64s
2026-03-28 02:26:16.093100 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.50s
2026-03-28 02:26:16.093112 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.30s
2026-03-28 02:26:16.093122 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.87s
2026-03-28 02:26:16.093162 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.77s
2026-03-28 02:26:16.093174 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s
2026-03-28 02:26:16.093185 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s
2026-03-28 02:26:16.093196 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s
2026-03-28 02:26:16.435339 | orchestrator | + osism apply kubernetes
2026-03-28 02:26:18.478738 | orchestrator | 2026-03-28 02:26:18 | INFO  | Task 20f841bd-de18-4348-b94c-072d1e1457d1 (kubernetes) was prepared for execution.
2026-03-28 02:26:18.478850 | orchestrator | 2026-03-28 02:26:18 | INFO  | It takes a moment until task 20f841bd-de18-4348-b94c-072d1e1457d1 (kubernetes) has been started and output is visible here.
2026-03-28 02:26:44.964094 | orchestrator |
2026-03-28 02:26:44.964279 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-03-28 02:26:44.964301 | orchestrator |
2026-03-28 02:26:44.964313 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-03-28 02:26:44.964326 | orchestrator | Saturday 28 March 2026 02:26:23 +0000 (0:00:00.196) 0:00:00.196 ********
2026-03-28 02:26:44.964337 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:26:44.964349 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:26:44.964360 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:26:44.964371 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:26:44.964382 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:26:44.964393 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:26:44.964404 | orchestrator |
2026-03-28 02:26:44.964415 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-03-28 02:26:44.964426 | orchestrator | Saturday 28 March 2026 02:26:24 +0000 (0:00:00.911) 0:00:01.108 ********
2026-03-28 02:26:44.964442 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.964461 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.964477 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.964495 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.964515 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.964533 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.964548 | orchestrator |
2026-03-28 02:26:44.964559 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-28 02:26:44.964573 | orchestrator | Saturday 28 March 2026 02:26:24 +0000 (0:00:00.607) 0:00:01.715 ********
2026-03-28 02:26:44.964585 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.964596 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.964607 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.964620 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.964632 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.964644 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.964657 | orchestrator |
2026-03-28 02:26:44.964670 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-28 02:26:44.964683 | orchestrator | Saturday 28 March 2026 02:26:25 +0000 (0:00:00.934) 0:00:02.649 ********
2026-03-28 02:26:44.964695 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:26:44.964707 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:26:44.964721 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:26:44.964738 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:26:44.964751 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:26:44.964763 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:26:44.964775 | orchestrator |
2026-03-28 02:26:44.964787 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-28 02:26:44.964800 | orchestrator | Saturday 28 March 2026 02:26:28 +0000 (0:00:02.721) 0:00:05.371 ********
2026-03-28 02:26:44.964813 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:26:44.964825 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:26:44.964838 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:26:44.964850 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:26:44.964863 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:26:44.964876 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:26:44.964889 | orchestrator |
2026-03-28 02:26:44.964902 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-28 02:26:44.964914 | orchestrator | Saturday 28 March 2026 02:26:30 +0000 (0:00:02.054) 0:00:07.426 ********
2026-03-28 02:26:44.964927 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:26:44.964962 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:26:44.964975 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:26:44.964988 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:26:44.964999 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:26:44.965010 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:26:44.965021 | orchestrator |
2026-03-28 02:26:44.965041 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-28 02:26:44.965052 | orchestrator | Saturday 28 March 2026 02:26:32 +0000 (0:00:01.507) 0:00:08.933 ********
2026-03-28 02:26:44.965063 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965074 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965085 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965096 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.965106 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.965117 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.965128 | orchestrator |
2026-03-28 02:26:44.965139 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-28 02:26:44.965149 | orchestrator | Saturday 28 March 2026 02:26:32 +0000 (0:00:00.921) 0:00:09.647 ********
2026-03-28 02:26:44.965213 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965228 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965239 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965250 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.965261 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.965271 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.965282 | orchestrator |
2026-03-28 02:26:44.965293 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-28 02:26:44.965304 | orchestrator | Saturday 28 March 2026 02:26:33 +0000 (0:00:00.921) 0:00:10.568 ********
2026-03-28 02:26:44.965314 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965325 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965336 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965347 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965358 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965368 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965379 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965390 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965401 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965412 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965442 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965454 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.965464 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965475 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965486 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.965497 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-28 02:26:44.965508 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-28 02:26:44.965518 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.965529 | orchestrator |
2026-03-28 02:26:44.965540 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-28 02:26:44.965551 | orchestrator | Saturday 28 March 2026 02:26:34 +0000 (0:00:00.670) 0:00:11.239 ********
2026-03-28 02:26:44.965562 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965572 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965583 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965603 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.965614 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.965624 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.965635 | orchestrator |
2026-03-28 02:26:44.965646 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-28 02:26:44.965657 | orchestrator | Saturday 28 March 2026 02:26:35 +0000 (0:00:01.329) 0:00:12.570 ********
2026-03-28 02:26:44.965668 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:26:44.965679 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:26:44.965690 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:26:44.965700 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:26:44.965711 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:26:44.965721 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:26:44.965732 | orchestrator |
2026-03-28 02:26:44.965743 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-28 02:26:44.965754 | orchestrator | Saturday 28 March 2026 02:26:36 +0000 (0:00:00.918) 0:00:13.489 ********
2026-03-28 02:26:44.965764 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:26:44.965775 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:26:44.965786 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:26:44.965796 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:26:44.965807 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:26:44.965817 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:26:44.965828 | orchestrator |
2026-03-28 02:26:44.965839 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-28 02:26:44.965850 | orchestrator | Saturday 28 March 2026 02:26:41 +0000 (0:00:04.672) 0:00:18.161 ********
2026-03-28 02:26:44.965860 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965877 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965889 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965900 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.965910 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.965921 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.965932 | orchestrator |
2026-03-28 02:26:44.965942 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-28 02:26:44.965953 | orchestrator | Saturday 28 March 2026 02:26:42 +0000 (0:00:00.951) 0:00:19.113 ********
2026-03-28 02:26:44.965964 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.965975 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.965985 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.965996 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.966006 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.966074 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.966087 | orchestrator |
2026-03-28 02:26:44.966098 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-28 02:26:44.966111 | orchestrator | Saturday 28 March 2026 02:26:43 +0000 (0:00:01.223) 0:00:20.336 ********
2026-03-28 02:26:44.966122 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.966133 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.966144 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.966155 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.966187 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.966199 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.966209 | orchestrator |
2026-03-28 02:26:44.966220 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-28 02:26:44.966231 | orchestrator | Saturday 28 March 2026 02:26:44 +0000 (0:00:00.600) 0:00:20.937 ********
2026-03-28 02:26:44.966242 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-28 02:26:44.966260 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-28 02:26:44.966271 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:26:44.966282 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-28 02:26:44.966300 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-28 02:26:44.966311 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:26:44.966322 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-28 02:26:44.966332 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-28 02:26:44.966344 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:26:44.966354 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-28 02:26:44.966365 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-28 02:26:44.966376 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:26:44.966387 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-28 02:26:44.966398 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-28 02:26:44.966408 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:26:44.966419 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-28 02:26:44.966430 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-28 02:26:44.966441 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:26:44.966452 | orchestrator |
2026-03-28 02:26:44.966463 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-28 02:26:44.966482 | orchestrator | Saturday 28 March 2026 02:26:44 +0000 (0:00:00.908) 0:00:21.845 ********
2026-03-28 02:28:00.261687 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:28:00.261810 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:28:00.261826 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:28:00.261839 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:28:00.261850 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:28:00.261862 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:28:00.261874 | orchestrator |
2026-03-28 02:28:00.261887 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-28 02:28:00.261900 | orchestrator | Saturday 28 March 2026 02:26:45 +0000 (0:00:00.609) 0:00:22.454 ********
2026-03-28 02:28:00.261912 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:28:00.261940 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:28:00.261953 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:28:00.261965 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:28:00.261977 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:28:00.261988 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:28:00.262001 | orchestrator |
2026-03-28 02:28:00.262073 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-28 02:28:00.262088 | orchestrator |
2026-03-28 02:28:00.262101 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-28 02:28:00.262114 | orchestrator | Saturday 28 March 2026 02:26:46 +0000 (0:00:01.233) 0:00:23.688 ********
2026-03-28 02:28:00.262127 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:28:00.262140 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:28:00.262152 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:28:00.262234 | orchestrator |
2026-03-28 02:28:00.262248 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-28 02:28:00.262262 | orchestrator | Saturday 28 March 2026 02:26:48 +0000 (0:00:01.578) 0:00:25.266 ********
2026-03-28 02:28:00.262276 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:28:00.262288 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:28:00.262300 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:28:00.262314 | orchestrator |
2026-03-28 02:28:00.262326 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-28 02:28:00.262339 | orchestrator | Saturday 28 March 2026 02:26:49 +0000 (0:00:01.209) 0:00:26.475 ********
2026-03-28 02:28:00.262353 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:28:00.262366 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:28:00.262379 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:28:00.262392 | orchestrator |
2026-03-28 02:28:00.262404 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-28 02:28:00.262417 | orchestrator | Saturday 28 March 2026 02:26:50 +0000 (0:00:01.024) 0:00:27.499 ********
2026-03-28 02:28:00.262454 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:28:00.262467 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:28:00.262479 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:28:00.262491 | orchestrator |
2026-03-28 02:28:00.262502 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-28 02:28:00.262514 | orchestrator | Saturday 28 March 2026 02:26:51 +0000 (0:00:00.825) 0:00:28.325 ********
2026-03-28 02:28:00.262527 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:28:00.262539 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:28:00.262551 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:28:00.262563 | orchestrator |
2026-03-28 02:28:00.262576 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-28 02:28:00.262607 | orchestrator | Saturday 28 March 2026 02:26:51 +0000 (0:00:00.408) 0:00:28.734 ********
2026-03-28 02:28:00.262619 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:28:00.262631 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:28:00.262643 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:28:00.262655 | orchestrator |
2026-03-28 02:28:00.262667 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-28 02:28:00.262679 | orchestrator | Saturday 28 March 2026 02:26:52 +0000 (0:00:00.976) 0:00:29.710 ********
2026-03-28 02:28:00.262691 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:28:00.262703 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:28:00.262715 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:28:00.262727 | orchestrator |
2026-03-28 02:28:00.262738 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-28 02:28:00.262750 | orchestrator | Saturday 28 March 2026 02:26:54 +0000 (0:00:01.446) 0:00:31.157 ********
2026-03-28 02:28:00.262763 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:28:00.262777 | orchestrator |
2026-03-28 02:28:00.262789 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-28 02:28:00.262801
| orchestrator | Saturday 28 March 2026 02:26:54 +0000 (0:00:00.494) 0:00:31.651 ******** 2026-03-28 02:28:00.262813 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:00.262825 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:00.262836 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:00.262848 | orchestrator | 2026-03-28 02:28:00.262860 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-28 02:28:00.262873 | orchestrator | Saturday 28 March 2026 02:26:56 +0000 (0:00:01.938) 0:00:33.589 ******** 2026-03-28 02:28:00.262884 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.262896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.262907 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:00.262919 | orchestrator | 2026-03-28 02:28:00.262931 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-28 02:28:00.262943 | orchestrator | Saturday 28 March 2026 02:26:57 +0000 (0:00:00.602) 0:00:34.192 ******** 2026-03-28 02:28:00.262955 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.262967 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.262979 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:00.262991 | orchestrator | 2026-03-28 02:28:00.263003 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-28 02:28:00.263014 | orchestrator | Saturday 28 March 2026 02:26:58 +0000 (0:00:00.994) 0:00:35.186 ******** 2026-03-28 02:28:00.263025 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.263035 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.263047 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:00.263058 | orchestrator | 2026-03-28 02:28:00.263069 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-28 02:28:00.263104 | orchestrator | 
Saturday 28 March 2026 02:26:59 +0000 (0:00:01.463) 0:00:36.650 ******** 2026-03-28 02:28:00.263116 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:28:00.263139 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.263150 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.263188 | orchestrator | 2026-03-28 02:28:00.263200 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-28 02:28:00.263211 | orchestrator | Saturday 28 March 2026 02:27:00 +0000 (0:00:00.360) 0:00:37.011 ******** 2026-03-28 02:28:00.263222 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:28:00.263233 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.263244 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.263255 | orchestrator | 2026-03-28 02:28:00.263266 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-28 02:28:00.263277 | orchestrator | Saturday 28 March 2026 02:27:00 +0000 (0:00:00.555) 0:00:37.566 ******** 2026-03-28 02:28:00.263289 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:00.263300 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:00.263312 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:00.263323 | orchestrator | 2026-03-28 02:28:00.263342 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-28 02:28:00.263354 | orchestrator | Saturday 28 March 2026 02:27:01 +0000 (0:00:01.068) 0:00:38.635 ******** 2026-03-28 02:28:00.263365 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:00.263376 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:00.263388 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:00.263399 | orchestrator | 2026-03-28 02:28:00.263410 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-28 02:28:00.263422 | orchestrator | Saturday 28 March 2026 
02:27:04 +0000 (0:00:02.895) 0:00:41.530 ******** 2026-03-28 02:28:00.263434 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:00.263446 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:00.263458 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:00.263474 | orchestrator | 2026-03-28 02:28:00.263486 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-28 02:28:00.263499 | orchestrator | Saturday 28 March 2026 02:27:04 +0000 (0:00:00.357) 0:00:41.887 ******** 2026-03-28 02:28:00.263511 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 02:28:00.263524 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 02:28:00.263536 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-28 02:28:00.263547 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 02:28:00.263559 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 02:28:00.263571 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-28 02:28:00.263582 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 02:28:00.263594 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-28 02:28:00.263605 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-28 02:28:00.263617 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 02:28:00.263628 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 02:28:00.263649 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-28 02:28:00.263661 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-28 02:28:00.263672 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-28 02:28:00.263684 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-03-28 02:28:00.263695 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:00.263707 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:00.263719 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:00.263730 | orchestrator | 2026-03-28 02:28:00.263747 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-28 02:28:00.263759 | orchestrator | Saturday 28 March 2026 02:27:58 +0000 (0:00:53.972) 0:01:35.860 ******** 2026-03-28 02:28:00.263771 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:28:00.263782 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:00.263794 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:00.263805 | orchestrator | 2026-03-28 02:28:00.263817 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-28 02:28:00.263828 | orchestrator | Saturday 28 March 2026 02:27:59 +0000 (0:00:00.301) 0:01:36.161 ******** 2026-03-28 02:28:00.263850 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.534865 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.534980 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.534996 | orchestrator | 2026-03-28 02:28:41.535009 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-28 02:28:41.535023 | orchestrator | Saturday 28 March 2026 02:28:00 +0000 (0:00:00.985) 0:01:37.146 ******** 2026-03-28 02:28:41.535034 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535045 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535056 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535068 | orchestrator | 2026-03-28 02:28:41.535079 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-28 02:28:41.535144 | orchestrator | Saturday 28 March 2026 02:28:01 +0000 (0:00:01.251) 0:01:38.398 ******** 2026-03-28 02:28:41.535157 
| orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535168 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535180 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535191 | orchestrator | 2026-03-28 02:28:41.535202 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-28 02:28:41.535213 | orchestrator | Saturday 28 March 2026 02:28:26 +0000 (0:00:25.450) 0:02:03.849 ******** 2026-03-28 02:28:41.535224 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:41.535236 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:41.535247 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.535258 | orchestrator | 2026-03-28 02:28:41.535269 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-28 02:28:41.535281 | orchestrator | Saturday 28 March 2026 02:28:27 +0000 (0:00:00.627) 0:02:04.476 ******** 2026-03-28 02:28:41.535292 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:41.535304 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:41.535315 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.535326 | orchestrator | 2026-03-28 02:28:41.535337 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-28 02:28:41.535348 | orchestrator | Saturday 28 March 2026 02:28:28 +0000 (0:00:00.641) 0:02:05.118 ******** 2026-03-28 02:28:41.535359 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535370 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535381 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535392 | orchestrator | 2026-03-28 02:28:41.535405 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-28 02:28:41.535443 | orchestrator | Saturday 28 March 2026 02:28:28 +0000 (0:00:00.638) 0:02:05.756 ******** 2026-03-28 02:28:41.535456 | orchestrator | ok: [testbed-node-0] 
2026-03-28 02:28:41.535469 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:41.535482 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.535494 | orchestrator | 2026-03-28 02:28:41.535507 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-28 02:28:41.535520 | orchestrator | Saturday 28 March 2026 02:28:29 +0000 (0:00:00.817) 0:02:06.573 ******** 2026-03-28 02:28:41.535532 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:41.535544 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:41.535557 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.535569 | orchestrator | 2026-03-28 02:28:41.535582 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-28 02:28:41.535595 | orchestrator | Saturday 28 March 2026 02:28:29 +0000 (0:00:00.323) 0:02:06.897 ******** 2026-03-28 02:28:41.535608 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535621 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535633 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535646 | orchestrator | 2026-03-28 02:28:41.535658 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-28 02:28:41.535671 | orchestrator | Saturday 28 March 2026 02:28:30 +0000 (0:00:00.651) 0:02:07.549 ******** 2026-03-28 02:28:41.535684 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535696 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535710 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535722 | orchestrator | 2026-03-28 02:28:41.535735 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-28 02:28:41.535749 | orchestrator | Saturday 28 March 2026 02:28:31 +0000 (0:00:00.627) 0:02:08.177 ******** 2026-03-28 02:28:41.535760 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535771 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535782 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535793 | orchestrator | 2026-03-28 02:28:41.535805 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-28 02:28:41.535816 | orchestrator | Saturday 28 March 2026 02:28:32 +0000 (0:00:00.908) 0:02:09.086 ******** 2026-03-28 02:28:41.535829 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:28:41.535840 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:28:41.535851 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:28:41.535862 | orchestrator | 2026-03-28 02:28:41.535874 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-28 02:28:41.535885 | orchestrator | Saturday 28 March 2026 02:28:33 +0000 (0:00:01.137) 0:02:10.223 ******** 2026-03-28 02:28:41.535896 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:28:41.535907 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:41.535932 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:41.535955 | orchestrator | 2026-03-28 02:28:41.535966 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-28 02:28:41.535977 | orchestrator | Saturday 28 March 2026 02:28:33 +0000 (0:00:00.271) 0:02:10.495 ******** 2026-03-28 02:28:41.535988 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:28:41.535999 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:28:41.536010 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:28:41.536021 | orchestrator | 2026-03-28 02:28:41.536032 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-28 02:28:41.536044 | orchestrator | Saturday 28 March 2026 02:28:33 +0000 (0:00:00.295) 0:02:10.790 ******** 2026-03-28 02:28:41.536055 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:41.536066 | orchestrator | 
ok: [testbed-node-1] 2026-03-28 02:28:41.536077 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.536088 | orchestrator | 2026-03-28 02:28:41.536120 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-28 02:28:41.536132 | orchestrator | Saturday 28 March 2026 02:28:34 +0000 (0:00:00.618) 0:02:11.409 ******** 2026-03-28 02:28:41.536152 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:28:41.536164 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:28:41.536192 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:28:41.536204 | orchestrator | 2026-03-28 02:28:41.536216 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-28 02:28:41.536228 | orchestrator | Saturday 28 March 2026 02:28:35 +0000 (0:00:00.875) 0:02:12.284 ******** 2026-03-28 02:28:41.536239 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 02:28:41.536251 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 02:28:41.536261 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-28 02:28:41.536272 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 02:28:41.536283 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 02:28:41.536294 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-28 02:28:41.536304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 02:28:41.536316 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 
02:28:41.536327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-28 02:28:41.536338 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-28 02:28:41.536349 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 02:28:41.536360 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 02:28:41.536370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-28 02:28:41.536381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 02:28:41.536392 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 02:28:41.536402 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-28 02:28:41.536413 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 02:28:41.536424 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 02:28:41.536435 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-28 02:28:41.536446 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-28 02:28:41.536457 | orchestrator | 2026-03-28 02:28:41.536468 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-28 02:28:41.536478 | orchestrator | 2026-03-28 02:28:41.536489 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-28 02:28:41.536500 | orchestrator | Saturday 28 March 2026 02:28:38 +0000 (0:00:03.091) 
0:02:15.376 ******** 2026-03-28 02:28:41.536511 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:28:41.536521 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:28:41.536532 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:28:41.536543 | orchestrator | 2026-03-28 02:28:41.536571 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-28 02:28:41.536582 | orchestrator | Saturday 28 March 2026 02:28:38 +0000 (0:00:00.323) 0:02:15.700 ******** 2026-03-28 02:28:41.536593 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:28:41.536604 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:28:41.536615 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:28:41.536633 | orchestrator | 2026-03-28 02:28:41.536644 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-28 02:28:41.536655 | orchestrator | Saturday 28 March 2026 02:28:39 +0000 (0:00:00.899) 0:02:16.599 ******** 2026-03-28 02:28:41.536665 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:28:41.536676 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:28:41.536687 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:28:41.536697 | orchestrator | 2026-03-28 02:28:41.536708 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-28 02:28:41.536719 | orchestrator | Saturday 28 March 2026 02:28:40 +0000 (0:00:00.322) 0:02:16.921 ******** 2026-03-28 02:28:41.536730 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:28:41.536741 | orchestrator | 2026-03-28 02:28:41.536752 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-28 02:28:41.536763 | orchestrator | Saturday 28 March 2026 02:28:40 +0000 (0:00:00.510) 0:02:17.432 ******** 2026-03-28 02:28:41.536774 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:28:41.536785 
| orchestrator | skipping: [testbed-node-4] 2026-03-28 02:28:41.536797 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:28:41.536816 | orchestrator | 2026-03-28 02:28:41.536834 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-28 02:28:41.536858 | orchestrator | Saturday 28 March 2026 02:28:41 +0000 (0:00:00.499) 0:02:17.932 ******** 2026-03-28 02:28:41.536882 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:28:41.536899 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:28:41.536916 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:28:41.536933 | orchestrator | 2026-03-28 02:28:41.536950 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-28 02:28:41.536967 | orchestrator | Saturday 28 March 2026 02:28:41 +0000 (0:00:00.308) 0:02:18.240 ******** 2026-03-28 02:28:41.536995 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:30:22.153254 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:30:22.153381 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:30:22.153401 | orchestrator | 2026-03-28 02:30:22.153414 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-28 02:30:22.153427 | orchestrator | Saturday 28 March 2026 02:28:41 +0000 (0:00:00.316) 0:02:18.557 ******** 2026-03-28 02:30:22.153433 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:30:22.153487 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:30:22.153495 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:30:22.153502 | orchestrator | 2026-03-28 02:30:22.153508 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-28 02:30:22.153514 | orchestrator | Saturday 28 March 2026 02:28:42 +0000 (0:00:00.627) 0:02:19.185 ******** 2026-03-28 02:30:22.153521 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:30:22.153528 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 02:30:22.153538 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:30:22.153551 | orchestrator | 2026-03-28 02:30:22.153563 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-28 02:30:22.153572 | orchestrator | Saturday 28 March 2026 02:28:43 +0000 (0:00:01.461) 0:02:20.646 ******** 2026-03-28 02:30:22.153582 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:30:22.153591 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:30:22.153600 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:30:22.153609 | orchestrator | 2026-03-28 02:30:22.153618 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-28 02:30:22.153628 | orchestrator | Saturday 28 March 2026 02:28:45 +0000 (0:00:01.274) 0:02:21.920 ******** 2026-03-28 02:30:22.153638 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:30:22.153647 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:30:22.153657 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:30:22.153666 | orchestrator | 2026-03-28 02:30:22.153674 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-28 02:30:22.153708 | orchestrator | 2026-03-28 02:30:22.153718 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-28 02:30:22.153727 | orchestrator | Saturday 28 March 2026 02:28:55 +0000 (0:00:10.048) 0:02:31.969 ******** 2026-03-28 02:30:22.153737 | orchestrator | ok: [testbed-manager] 2026-03-28 02:30:22.153748 | orchestrator | 2026-03-28 02:30:22.153758 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-28 02:30:22.153767 | orchestrator | Saturday 28 March 2026 02:28:55 +0000 (0:00:00.826) 0:02:32.795 ******** 2026-03-28 02:30:22.153773 | orchestrator | changed: [testbed-manager] 2026-03-28 
02:30:22.153779 | orchestrator | 2026-03-28 02:30:22.153786 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 02:30:22.153792 | orchestrator | Saturday 28 March 2026 02:28:56 +0000 (0:00:00.675) 0:02:33.471 ******** 2026-03-28 02:30:22.153799 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 02:30:22.153806 | orchestrator | 2026-03-28 02:30:22.153813 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 02:30:22.153820 | orchestrator | Saturday 28 March 2026 02:28:57 +0000 (0:00:00.529) 0:02:34.000 ******** 2026-03-28 02:30:22.153826 | orchestrator | changed: [testbed-manager] 2026-03-28 02:30:22.153833 | orchestrator | 2026-03-28 02:30:22.153839 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-28 02:30:22.153847 | orchestrator | Saturday 28 March 2026 02:28:58 +0000 (0:00:00.904) 0:02:34.905 ******** 2026-03-28 02:30:22.153856 | orchestrator | changed: [testbed-manager] 2026-03-28 02:30:22.153866 | orchestrator | 2026-03-28 02:30:22.153875 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-28 02:30:22.153884 | orchestrator | Saturday 28 March 2026 02:28:58 +0000 (0:00:00.602) 0:02:35.508 ******** 2026-03-28 02:30:22.153894 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 02:30:22.153903 | orchestrator | 2026-03-28 02:30:22.153912 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-28 02:30:22.153922 | orchestrator | Saturday 28 March 2026 02:29:00 +0000 (0:00:01.584) 0:02:37.093 ******** 2026-03-28 02:30:22.153932 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 02:30:22.153943 | orchestrator | 2026-03-28 02:30:22.153997 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-03-28 02:30:22.154010 | orchestrator | Saturday 28 March 2026 02:29:01 +0000 (0:00:00.847) 0:02:37.940 ********
2026-03-28 02:30:22.154067 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:22.154080 | orchestrator |
2026-03-28 02:30:22.154091 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-28 02:30:22.154100 | orchestrator | Saturday 28 March 2026 02:29:01 +0000 (0:00:00.439) 0:02:38.380 ********
2026-03-28 02:30:22.154109 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:22.154119 | orchestrator |
2026-03-28 02:30:22.154127 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-28 02:30:22.154133 | orchestrator |
2026-03-28 02:30:22.154139 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-28 02:30:22.154145 | orchestrator | Saturday 28 March 2026 02:29:01 +0000 (0:00:00.455) 0:02:38.836 ********
2026-03-28 02:30:22.154151 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:22.154157 | orchestrator |
2026-03-28 02:30:22.154167 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-28 02:30:22.154176 | orchestrator | Saturday 28 March 2026 02:29:02 +0000 (0:00:00.373) 0:02:39.210 ********
2026-03-28 02:30:22.154186 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 02:30:22.154197 | orchestrator |
2026-03-28 02:30:22.154205 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-28 02:30:22.154214 | orchestrator | Saturday 28 March 2026 02:29:02 +0000 (0:00:00.236) 0:02:39.446 ********
2026-03-28 02:30:22.154224 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:22.154234 | orchestrator |
2026-03-28 02:30:22.154255 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-28 02:30:22.154265 | orchestrator | Saturday 28 March 2026 02:29:03 +0000 (0:00:00.853) 0:02:40.299 ********
2026-03-28 02:30:22.154275 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:22.154285 | orchestrator |
2026-03-28 02:30:22.154318 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-28 02:30:22.154327 | orchestrator | Saturday 28 March 2026 02:29:05 +0000 (0:00:01.674) 0:02:41.974 ********
2026-03-28 02:30:22.154333 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:22.154338 | orchestrator |
2026-03-28 02:30:22.154344 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-28 02:30:22.154350 | orchestrator | Saturday 28 March 2026 02:29:05 +0000 (0:00:00.834) 0:02:42.809 ********
2026-03-28 02:30:22.154356 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:22.154361 | orchestrator |
2026-03-28 02:30:22.154367 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-28 02:30:22.154373 | orchestrator | Saturday 28 March 2026 02:29:06 +0000 (0:00:00.459) 0:02:43.268 ********
2026-03-28 02:30:22.154378 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:22.154384 | orchestrator |
2026-03-28 02:30:22.154390 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-28 02:30:22.154395 | orchestrator | Saturday 28 March 2026 02:29:14 +0000 (0:00:07.953) 0:02:51.222 ********
2026-03-28 02:30:22.154401 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:22.154409 | orchestrator |
2026-03-28 02:30:22.154419 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-28 02:30:22.154428 | orchestrator | Saturday 28 March 2026 02:29:27 +0000 (0:00:13.284) 0:03:04.506 ********
2026-03-28 02:30:22.154438 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:22.154447 | orchestrator |
2026-03-28 02:30:22.154457 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-28 02:30:22.154467 | orchestrator |
2026-03-28 02:30:22.154476 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-28 02:30:22.154564 | orchestrator | Saturday 28 March 2026 02:29:28 +0000 (0:00:00.789) 0:03:05.295 ********
2026-03-28 02:30:22.154575 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:30:22.154585 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:30:22.154593 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:30:22.154602 | orchestrator |
2026-03-28 02:30:22.154611 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-28 02:30:22.154622 | orchestrator | Saturday 28 March 2026 02:29:28 +0000 (0:00:00.311) 0:03:05.607 ********
2026-03-28 02:30:22.154631 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154641 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:30:22.154650 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:30:22.154660 | orchestrator |
2026-03-28 02:30:22.154670 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-28 02:30:22.154676 | orchestrator | Saturday 28 March 2026 02:29:29 +0000 (0:00:00.349) 0:03:05.956 ********
2026-03-28 02:30:22.154682 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:30:22.154688 | orchestrator |
2026-03-28 02:30:22.154694 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-28 02:30:22.154699 | orchestrator | Saturday 28 March 2026 02:29:29 +0000 (0:00:00.727) 0:03:06.683 ********
2026-03-28 02:30:22.154705 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 02:30:22.154711 | orchestrator |
2026-03-28 02:30:22.154717 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-28 02:30:22.154722 | orchestrator | Saturday 28 March 2026 02:29:30 +0000 (0:00:00.841) 0:03:07.525 ********
2026-03-28 02:30:22.154728 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 02:30:22.154734 | orchestrator |
2026-03-28 02:30:22.154740 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-28 02:30:22.154754 | orchestrator | Saturday 28 March 2026 02:29:31 +0000 (0:00:00.879) 0:03:08.405 ********
2026-03-28 02:30:22.154760 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154765 | orchestrator |
2026-03-28 02:30:22.154771 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-28 02:30:22.154777 | orchestrator | Saturday 28 March 2026 02:29:31 +0000 (0:00:00.110) 0:03:08.516 ********
2026-03-28 02:30:22.154782 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 02:30:22.154788 | orchestrator |
2026-03-28 02:30:22.154793 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-28 02:30:22.154799 | orchestrator | Saturday 28 March 2026 02:29:32 +0000 (0:00:00.976) 0:03:09.493 ********
2026-03-28 02:30:22.154805 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154810 | orchestrator |
2026-03-28 02:30:22.154816 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-28 02:30:22.154822 | orchestrator | Saturday 28 March 2026 02:29:32 +0000 (0:00:00.122) 0:03:09.615 ********
2026-03-28 02:30:22.154828 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154833 | orchestrator |
2026-03-28 02:30:22.154839 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-28 02:30:22.154845 | orchestrator | Saturday 28 March 2026 02:29:32 +0000 (0:00:00.151) 0:03:09.766 ********
2026-03-28 02:30:22.154850 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154856 | orchestrator |
2026-03-28 02:30:22.154862 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-28 02:30:22.154874 | orchestrator | Saturday 28 March 2026 02:29:32 +0000 (0:00:00.121) 0:03:09.887 ********
2026-03-28 02:30:22.154880 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:22.154886 | orchestrator |
2026-03-28 02:30:22.154892 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-28 02:30:22.154898 | orchestrator | Saturday 28 March 2026 02:29:33 +0000 (0:00:00.138) 0:03:10.026 ********
2026-03-28 02:30:22.154903 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 02:30:22.154909 | orchestrator |
2026-03-28 02:30:22.154915 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-28 02:30:22.154920 | orchestrator | Saturday 28 March 2026 02:29:39 +0000 (0:00:06.829) 0:03:16.855 ********
2026-03-28 02:30:22.154926 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-28 02:30:22.154932 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
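The "Wait for Cilium resources" task above polls each workload (operator deployment, daemonset, Hubble components) until its rollout completes, retrying up to 30 times as the `FAILED - RETRYING` lines show. A minimal standalone sketch of that retry pattern, assuming a generic command; the `kubectl rollout status` invocation in the comment is illustrative, not the role's exact call:

```shell
#!/usr/bin/env bash
# Retry helper: run a command until it succeeds or the retry budget is spent.
wait_for() {
  local retries=$1 delay=$2
  shift 2
  local attempt
  for ((attempt = 1; attempt <= retries; attempt++)); do
    if "$@"; then
      return 0
    fi
    sleep "${delay}"
  done
  echo "gave up after ${retries} attempts" >&2
  return 1
}

# In the playbook this wraps something like (illustrative only):
#   wait_for 30 10 kubectl -n kube-system rollout status deployment/cilium-operator --timeout=10s
```

The 42-second duration reported later in the tasks recap is simply the sum of these polling rounds while the Cilium pods pull images and become ready.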
2026-03-28 02:30:22.154947 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-28 02:30:46.113331 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-28 02:30:46.113450 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-28 02:30:46.113462 | orchestrator |
2026-03-28 02:30:46.113470 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-28 02:30:46.113477 | orchestrator | Saturday 28 March 2026 02:30:22 +0000 (0:00:42.183) 0:03:59.038 ********
2026-03-28 02:30:46.113484 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 02:30:46.113521 | orchestrator |
2026-03-28 02:30:46.113529 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-28 02:30:46.113536 | orchestrator | Saturday 28 March 2026 02:30:23 +0000 (0:00:01.242) 0:04:00.281 ********
2026-03-28 02:30:46.113544 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 02:30:46.113550 | orchestrator |
2026-03-28 02:30:46.113557 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-28 02:30:46.113563 | orchestrator | Saturday 28 March 2026 02:30:24 +0000 (0:00:01.579) 0:04:01.860 ********
2026-03-28 02:30:46.113569 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 02:30:46.113575 | orchestrator |
2026-03-28 02:30:46.113581 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-28 02:30:46.113589 | orchestrator | Saturday 28 March 2026 02:30:26 +0000 (0:00:01.365) 0:04:03.226 ********
2026-03-28 02:30:46.113615 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:46.113622 | orchestrator |
2026-03-28 02:30:46.113629 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-28 02:30:46.113636 | orchestrator | Saturday 28 March 2026 02:30:26 +0000 (0:00:00.147) 0:04:03.373 ********
2026-03-28 02:30:46.113643 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-28 02:30:46.113651 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-28 02:30:46.113657 | orchestrator |
2026-03-28 02:30:46.113664 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-28 02:30:46.113671 | orchestrator | Saturday 28 March 2026 02:30:28 +0000 (0:00:01.875) 0:04:05.249 ********
2026-03-28 02:30:46.113678 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:46.113684 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:30:46.113691 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:30:46.113698 | orchestrator |
2026-03-28 02:30:46.113704 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-28 02:30:46.113711 | orchestrator | Saturday 28 March 2026 02:30:28 +0000 (0:00:00.338) 0:04:05.588 ********
2026-03-28 02:30:46.113718 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:30:46.113725 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:30:46.113732 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:30:46.113738 | orchestrator |
2026-03-28 02:30:46.113745 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-28 02:30:46.113751 | orchestrator |
2026-03-28 02:30:46.113757 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-28 02:30:46.113763 | orchestrator | Saturday 28 March 2026 02:30:29 +0000 (0:00:00.373) 0:04:06.470 ********
2026-03-28 02:30:46.113769 | orchestrator | ok: [testbed-manager]
2026-03-28 02:30:46.113775 | orchestrator |
2026-03-28 02:30:46.113781 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-28 02:30:46.113788 | orchestrator | Saturday 28 March 2026 02:30:29 +0000 (0:00:00.373) 0:04:06.843 ********
2026-03-28 02:30:46.113796 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 02:30:46.113802 | orchestrator |
2026-03-28 02:30:46.113808 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-28 02:30:46.113815 | orchestrator | Saturday 28 March 2026 02:30:30 +0000 (0:00:00.238) 0:04:07.082 ********
2026-03-28 02:30:46.113821 | orchestrator | changed: [testbed-manager]
2026-03-28 02:30:46.113828 | orchestrator |
2026-03-28 02:30:46.113835 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-28 02:30:46.113841 | orchestrator |
2026-03-28 02:30:46.113849 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-28 02:30:46.113856 | orchestrator | Saturday 28 March 2026 02:30:35 +0000 (0:00:05.672) 0:04:12.755 ********
2026-03-28 02:30:46.113862 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:30:46.113869 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:30:46.113876 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:30:46.113882 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:30:46.113890 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:30:46.113897 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:30:46.113904 | orchestrator |
2026-03-28 02:30:46.113912 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-28 02:30:46.113920 | orchestrator | Saturday 28 March 2026 02:30:36 +0000 (0:00:00.618) 0:04:13.374 ********
2026-03-28 02:30:46.113928 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 02:30:46.113957 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 02:30:46.113963 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 02:30:46.113970 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 02:30:46.113988 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 02:30:46.113998 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 02:30:46.114006 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 02:30:46.114054 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 02:30:46.114064 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 02:30:46.114093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 02:30:46.114103 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 02:30:46.114113 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-28 02:30:46.114123 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 02:30:46.114132 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 02:30:46.114142 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-28 02:30:46.114164 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 02:30:46.114173 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 02:30:46.114182 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-28 02:30:46.114191 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 02:30:46.114200 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 02:30:46.114207 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-28 02:30:46.114216 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 02:30:46.114224 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 02:30:46.114233 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-28 02:30:46.114241 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 02:30:46.114249 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 02:30:46.114258 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-28 02:30:46.114266 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 02:30:46.114273 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 02:30:46.114281 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-28 02:30:46.114287 | orchestrator |
2026-03-28 02:30:46.114294 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-28 02:30:46.114300 | orchestrator | Saturday 28 March 2026 02:30:44 +0000 (0:00:08.226) 0:04:21.600 ********
2026-03-28 02:30:46.114306 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:30:46.114313 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:30:46.114318 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:30:46.114325 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:46.114332 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:30:46.114338 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:30:46.114345 | orchestrator |
2026-03-28 02:30:46.114352 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-28 02:30:46.114359 | orchestrator | Saturday 28 March 2026 02:30:45 +0000 (0:00:00.603) 0:04:22.204 ********
2026-03-28 02:30:46.114366 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:30:46.114379 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:30:46.114385 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:30:46.114391 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:30:46.114398 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:30:46.114404 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:30:46.114411 | orchestrator |
2026-03-28 02:30:46.114418 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:30:46.114425 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:30:46.114434 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-28 02:30:46.114440 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 02:30:46.114446 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 02:30:46.114453 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 02:30:46.114459 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 02:30:46.114466 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 02:30:46.114473 | orchestrator |
2026-03-28 02:30:46.114480 | orchestrator |
2026-03-28 02:30:46.114487 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:30:46.114494 | orchestrator | Saturday 28 March 2026 02:30:46 +0000 (0:00:00.786) 0:04:22.990 ********
2026-03-28 02:30:46.114507 | orchestrator | ===============================================================================
2026-03-28 02:30:46.571493 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.97s
2026-03-28 02:30:46.571593 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.18s
2026-03-28 02:30:46.571609 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.45s
2026-03-28 02:30:46.571621 | orchestrator | kubectl : Install required packages ------------------------------------ 13.28s
2026-03-28 02:30:46.571632 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.05s
2026-03-28 02:30:46.571643 | orchestrator | Manage labels ----------------------------------------------------------- 8.23s
2026-03-28 02:30:46.571654 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.95s
2026-03-28 02:30:46.571665 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.83s
2026-03-28 02:30:46.571676 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.67s
2026-03-28 02:30:46.571687 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.67s
2026-03-28 02:30:46.571699 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.09s
2026-03-28 02:30:46.571712 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.90s
2026-03-28 02:30:46.571723 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.72s
2026-03-28 02:30:46.571734 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.05s
2026-03-28 02:30:46.571744 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.94s
2026-03-28 02:30:46.571755 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.88s
2026-03-28 02:30:46.571766 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.67s
2026-03-28 02:30:46.571806 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s
2026-03-28 02:30:46.571817 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.58s
2026-03-28 02:30:46.571828 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.58s
2026-03-28 02:30:46.924558 | orchestrator | + osism apply copy-kubeconfig
2026-03-28 02:30:59.006458 | orchestrator | 2026-03-28 02:30:58 | INFO  | Task 4d27fcf4-14de-47eb-b371-49d9d9ff145d (copy-kubeconfig) was prepared for execution.
2026-03-28 02:30:59.006560 | orchestrator | 2026-03-28 02:30:59 | INFO  | It takes a moment until task 4d27fcf4-14de-47eb-b371-49d9d9ff145d (copy-kubeconfig) has been started and output is visible here.
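The `osism apply copy-kubeconfig` step invoked here fetches the k3s kubeconfig from the first control-plane node (testbed-node-0, 192.168.16.10 per the play output that follows) and rewrites its server address so the manager reaches the API directly rather than via the node-local loopback that k3s writes by default. A hedged sketch of that rewrite; file paths and YAML layout are assumptions for illustration, not taken from the play:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the kubeconfig fetched from testbed-node-0; k3s puts a
# loopback server address into its generated kubeconfig.
KUBECONFIG_OUT=./kubeconfig            # assumed destination path
API_SERVER="https://192.168.16.10:6443"  # testbed-node-0, as seen in the log

cat > "${KUBECONFIG_OUT}" <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Repoint the server entry at the address reachable from the manager
# (GNU sed assumed for in-place editing).
sed -i "s|server: https://127.0.0.1:6443|server: ${API_SERVER}|" "${KUBECONFIG_OUT}"
grep 'server:' "${KUBECONFIG_OUT}"
```

This corresponds to the "Get kubeconfig file", "Write kubeconfig file", and "Change server address in the kubeconfig file" tasks in the play below it.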
2026-03-28 02:31:06.303094 | orchestrator |
2026-03-28 02:31:06.303205 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-28 02:31:06.303221 | orchestrator |
2026-03-28 02:31:06.303233 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-28 02:31:06.303245 | orchestrator | Saturday 28 March 2026 02:31:03 +0000 (0:00:00.160) 0:00:00.160 ********
2026-03-28 02:31:06.303256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 02:31:06.303267 | orchestrator |
2026-03-28 02:31:06.303279 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-28 02:31:06.303311 | orchestrator | Saturday 28 March 2026 02:31:04 +0000 (0:00:00.761) 0:00:00.921 ********
2026-03-28 02:31:06.303323 | orchestrator | changed: [testbed-manager]
2026-03-28 02:31:06.303335 | orchestrator |
2026-03-28 02:31:06.303347 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-28 02:31:06.303358 | orchestrator | Saturday 28 March 2026 02:31:05 +0000 (0:00:01.285) 0:00:02.207 ********
2026-03-28 02:31:06.303374 | orchestrator | changed: [testbed-manager]
2026-03-28 02:31:06.303385 | orchestrator |
2026-03-28 02:31:06.303400 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:31:06.303412 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:31:06.303424 | orchestrator |
2026-03-28 02:31:06.303436 | orchestrator |
2026-03-28 02:31:06.303447 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:31:06.303458 | orchestrator | Saturday 28 March 2026 02:31:05 +0000 (0:00:00.505) 0:00:02.712 ********
2026-03-28 02:31:06.303469 | orchestrator | ===============================================================================
2026-03-28 02:31:06.303480 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s
2026-03-28 02:31:06.303491 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s
2026-03-28 02:31:06.303502 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s
2026-03-28 02:31:06.680760 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-03-28 02:31:18.971393 | orchestrator | 2026-03-28 02:31:18 | INFO  | Task a9278999-03f9-41da-94af-a09415af3064 (openstackclient) was prepared for execution.
2026-03-28 02:31:18.971513 | orchestrator | 2026-03-28 02:31:18 | INFO  | It takes a moment until task a9278999-03f9-41da-94af-a09415af3064 (openstackclient) has been started and output is visible here.
2026-03-28 02:32:07.855653 | orchestrator |
2026-03-28 02:32:07.855802 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-28 02:32:07.855828 | orchestrator |
2026-03-28 02:32:07.855847 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-28 02:32:07.855980 | orchestrator | Saturday 28 March 2026 02:31:23 +0000 (0:00:00.231) 0:00:00.231 ********
2026-03-28 02:32:07.856003 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-28 02:32:07.856023 | orchestrator |
2026-03-28 02:32:07.856075 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-28 02:32:07.856089 | orchestrator | Saturday 28 March 2026 02:31:23 +0000 (0:00:00.235) 0:00:00.466 ********
2026-03-28 02:32:07.856100 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-28 02:32:07.856112 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-28 02:32:07.856123 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-28 02:32:07.856134 | orchestrator |
2026-03-28 02:32:07.856145 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-28 02:32:07.856156 | orchestrator | Saturday 28 March 2026 02:31:24 +0000 (0:00:01.305) 0:00:01.772 ********
2026-03-28 02:32:07.856167 | orchestrator | changed: [testbed-manager]
2026-03-28 02:32:07.856178 | orchestrator |
2026-03-28 02:32:07.856192 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-28 02:32:07.856204 | orchestrator | Saturday 28 March 2026 02:31:26 +0000 (0:00:01.575) 0:00:03.347 ********
2026-03-28 02:32:07.856218 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-28 02:32:07.856232 | orchestrator | ok: [testbed-manager]
2026-03-28 02:32:07.856246 | orchestrator |
2026-03-28 02:32:07.856259 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-28 02:32:07.856271 | orchestrator | Saturday 28 March 2026 02:32:02 +0000 (0:00:35.989) 0:00:39.337 ********
2026-03-28 02:32:07.856283 | orchestrator | changed: [testbed-manager]
2026-03-28 02:32:07.856297 | orchestrator |
2026-03-28 02:32:07.856310 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-28 02:32:07.856323 | orchestrator | Saturday 28 March 2026 02:32:03 +0000 (0:00:00.647) 0:00:40.304 ********
2026-03-28 02:32:07.856335 | orchestrator | ok: [testbed-manager]
2026-03-28 02:32:07.856348 | orchestrator |
2026-03-28 02:32:07.856360 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-28 02:32:07.856373 | orchestrator | Saturday 28 March 2026 02:32:04 +0000 (0:00:00.647) 0:00:40.952 ********
2026-03-28 02:32:07.856386 | orchestrator | changed: [testbed-manager]
2026-03-28 02:32:07.856398 | orchestrator |
2026-03-28 02:32:07.856412 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-28 02:32:07.856425 | orchestrator | Saturday 28 March 2026 02:32:05 +0000 (0:00:01.455) 0:00:42.408 ********
2026-03-28 02:32:07.856437 | orchestrator | changed: [testbed-manager]
2026-03-28 02:32:07.856449 | orchestrator |
2026-03-28 02:32:07.856462 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-28 02:32:07.856476 | orchestrator | Saturday 28 March 2026 02:32:06 +0000 (0:00:00.772) 0:00:43.180 ********
2026-03-28 02:32:07.856489 | orchestrator | changed: [testbed-manager]
2026-03-28 02:32:07.856502 | orchestrator |
2026-03-28 02:32:07.856514 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-28 02:32:07.856527 | orchestrator | Saturday 28 March 2026 02:32:06 +0000 (0:00:00.603) 0:00:43.783 ********
2026-03-28 02:32:07.856538 | orchestrator | ok: [testbed-manager]
2026-03-28 02:32:07.856549 | orchestrator |
2026-03-28 02:32:07.856559 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:32:07.856571 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:32:07.856583 | orchestrator |
2026-03-28 02:32:07.856594 | orchestrator |
2026-03-28 02:32:07.856605 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:32:07.856615 | orchestrator | Saturday 28 March 2026 02:32:07 +0000 (0:00:00.446) 0:00:44.229 ********
2026-03-28 02:32:07.856626 | orchestrator | ===============================================================================
2026-03-28 02:32:07.856637 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.99s
2026-03-28 02:32:07.856648 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.58s
2026-03-28 02:32:07.856668 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.46s
2026-03-28 02:32:07.856679 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.31s
2026-03-28 02:32:07.856690 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.97s
2026-03-28 02:32:07.856701 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.77s
2026-03-28 02:32:07.856712 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2026-03-28 02:32:07.856723 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-03-28 02:32:07.856734 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s
2026-03-28 02:32:07.856745 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-03-28 02:32:10.395772 | orchestrator | 2026-03-28 02:32:10 | INFO  | Task 046b60ea-b11a-4f29-a750-d5769ec9325c (common) was prepared for execution.
2026-03-28 02:32:10.395902 | orchestrator | 2026-03-28 02:32:10 | INFO  | It takes a moment until task 046b60ea-b11a-4f29-a750-d5769ec9325c (common) has been started and output is visible here.
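The openstackclient role above copies a docker-compose.yml onto the manager and then, via its handlers, restarts the container and waits for it to report healthy. A hedged sketch of what such a compose unit might look like; the service name, image tag, mounts, and healthcheck command are assumptions for illustration (only the `registry.osism.tech` registry prefix and the two `/opt/...` directories appear in this log), not the file the role actually renders:

```yaml
# Hypothetical compose sketch; not the rendered docker-compose.yml.
services:
  openstackclient:
    container_name: openstackclient
    image: registry.osism.tech/osism/openstackclient:latest  # tag assumed
    restart: unless-stopped
    volumes:
      - /opt/configuration/environments/openstack:/etc/openstack:ro
      - /opt/openstackclient/data:/data
    # The "Wait for an healthy service" handler polls a healthcheck like this
    # until the container's state reports "healthy".
    healthcheck:
      test: ["CMD", "openstack", "--version"]
      interval: 30s
      timeout: 10s
      retries: 5
```

The 36-second "Manage openstackclient service" duration in the recap is dominated by the initial image pull and the one retry visible in the task output.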
2026-03-28 02:32:23.580257 | orchestrator |
2026-03-28 02:32:23.580362 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-28 02:32:23.580373 | orchestrator |
2026-03-28 02:32:23.580380 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-28 02:32:23.580388 | orchestrator | Saturday 28 March 2026 02:32:14 +0000 (0:00:00.309) 0:00:00.309 ********
2026-03-28 02:32:23.580396 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 02:32:23.580404 | orchestrator |
2026-03-28 02:32:23.580411 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-28 02:32:23.580418 | orchestrator | Saturday 28 March 2026 02:32:16 +0000 (0:00:01.437) 0:00:01.746 ********
2026-03-28 02:32:23.580424 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580431 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580438 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580444 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580451 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 02:32:23.580457 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580464 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580470 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-28 02:32:23.580493 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-28 02:32:23.580501 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 02:32:23.580507 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580514 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 02:32:23.580521 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 02:32:23.580527 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 02:32:23.580534 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580540 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 02:32:23.580547 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580570 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580578 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580583 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580590 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 02:32:23.580596 | orchestrator | 2026-03-28 02:32:23.580602 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 02:32:23.580607 | orchestrator | Saturday 28 March 2026 02:32:19 +0000 (0:00:02.782) 0:00:04.529 ******** 2026-03-28 02:32:23.580613 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:32:23.580621 | orchestrator | 2026-03-28 02:32:23.580626 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-28 02:32:23.580636 | orchestrator | Saturday 28 March 2026 02:32:20 +0000 (0:00:01.425) 0:00:05.954 ******** 2026-03-28 02:32:23.580645 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:23.580724 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:23.580731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:23.580743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.471990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472102 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 
02:32:24.472119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:24.472193 | orchestrator | 2026-03-28 02:32:24.472198 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-28 02:32:24.472204 | orchestrator | Saturday 28 March 2026 02:32:24 +0000 (0:00:03.520) 0:00:09.475 ******** 2026-03-28 02:32:24.472210 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:24.472215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:24.472219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:24.472223 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:32:24.472229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:24.472240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074205 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:32:25.074247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:25.074253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074262 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:32:25.074267 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:25.074274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074282 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:32:25.074297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:25.074305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074313 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:32:25.074317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:25.074321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:25.074329 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:32:25.074334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:25.074341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013525 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:32:26.013540 | orchestrator | 2026-03-28 02:32:26.013548 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 02:32:26.013555 | orchestrator | Saturday 28 March 2026 02:32:25 +0000 (0:00:00.959) 0:00:10.434 ******** 2026-03-28 02:32:26.013563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:26.013572 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013579 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013585 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:32:26.013606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:26.013616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013648 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:32:26.013675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:26.013681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013693 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:32:26.013698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:26.013704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-28 02:32:26.013714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:26.013724 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:32:26.013730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:26.013748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182360 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:32:31.182369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:31.182377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182387 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:32:31.182391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 02:32:31.182412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:31.182421 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:32:31.182425 | orchestrator | 2026-03-28 
02:32:31.182430 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 02:32:31.182435 | orchestrator | Saturday 28 March 2026 02:32:26 +0000 (0:00:01.895) 0:00:12.330 ******** 2026-03-28 02:32:31.182439 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:32:31.182443 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:32:31.182447 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:32:31.182450 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:32:31.182464 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:32:31.182468 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:32:31.182472 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:32:31.182476 | orchestrator | 2026-03-28 02:32:31.182480 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 02:32:31.182483 | orchestrator | Saturday 28 March 2026 02:32:27 +0000 (0:00:00.772) 0:00:13.103 ******** 2026-03-28 02:32:31.182487 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:32:31.182491 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:32:31.182495 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:32:31.182498 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:32:31.182502 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:32:31.182506 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:32:31.182510 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:32:31.182514 | orchestrator | 2026-03-28 02:32:31.182518 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 02:32:31.182522 | orchestrator | Saturday 28 March 2026 02:32:28 +0000 (0:00:00.887) 0:00:13.990 ******** 2026-03-28 02:32:31.182526 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:31.182578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:34.041586 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041749 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:34.041795 | orchestrator | 2026-03-28 02:32:34.041802 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-28 02:32:34.041808 | orchestrator | Saturday 28 March 2026 02:32:32 +0000 
(0:00:03.455) 0:00:17.446 ******** 2026-03-28 02:32:34.041813 | orchestrator | [WARNING]: Skipped 2026-03-28 02:32:34.041820 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-28 02:32:34.041827 | orchestrator | to this access issue: 2026-03-28 02:32:34.041867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-28 02:32:34.041875 | orchestrator | directory 2026-03-28 02:32:34.041880 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 02:32:34.041887 | orchestrator | 2026-03-28 02:32:34.041892 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-28 02:32:34.041897 | orchestrator | Saturday 28 March 2026 02:32:33 +0000 (0:00:00.993) 0:00:18.440 ******** 2026-03-28 02:32:34.041902 | orchestrator | [WARNING]: Skipped 2026-03-28 02:32:34.041912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-28 02:32:43.720640 | orchestrator | to this access issue: 2026-03-28 02:32:43.720736 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-28 02:32:43.720748 | orchestrator | directory 2026-03-28 02:32:43.720757 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 02:32:43.720766 | orchestrator | 2026-03-28 02:32:43.720774 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-28 02:32:43.720783 | orchestrator | Saturday 28 March 2026 02:32:34 +0000 (0:00:01.241) 0:00:19.681 ******** 2026-03-28 02:32:43.720808 | orchestrator | [WARNING]: Skipped 2026-03-28 02:32:43.720817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-28 02:32:43.720824 | orchestrator | to this access issue: 2026-03-28 02:32:43.720894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-03-28 02:32:43.720902 | orchestrator | directory 2026-03-28 02:32:43.720909 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 02:32:43.720917 | orchestrator | 2026-03-28 02:32:43.720924 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-28 02:32:43.720932 | orchestrator | Saturday 28 March 2026 02:32:35 +0000 (0:00:00.853) 0:00:20.535 ******** 2026-03-28 02:32:43.720939 | orchestrator | [WARNING]: Skipped 2026-03-28 02:32:43.720946 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-28 02:32:43.720954 | orchestrator | to this access issue: 2026-03-28 02:32:43.720961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-28 02:32:43.720968 | orchestrator | directory 2026-03-28 02:32:43.720976 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 02:32:43.720983 | orchestrator | 2026-03-28 02:32:43.720990 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-28 02:32:43.720998 | orchestrator | Saturday 28 March 2026 02:32:36 +0000 (0:00:00.840) 0:00:21.376 ******** 2026-03-28 02:32:43.721005 | orchestrator | changed: [testbed-manager] 2026-03-28 02:32:43.721012 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:32:43.721019 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:32:43.721027 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:32:43.721034 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:32:43.721041 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:32:43.721062 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:32:43.721070 | orchestrator | 2026-03-28 02:32:43.721078 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-28 02:32:43.721087 | orchestrator | Saturday 28 March 2026 02:32:38 +0000 (0:00:02.545) 0:00:23.922 ******** 
2026-03-28 02:32:43.721095 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721121 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721132 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721145 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721158 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 02:32:43.721171 | orchestrator | 2026-03-28 02:32:43.721190 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-28 02:32:43.721203 | orchestrator | Saturday 28 March 2026 02:32:40 +0000 (0:00:02.101) 0:00:26.023 ******** 2026-03-28 02:32:43.721215 | orchestrator | changed: [testbed-manager] 2026-03-28 02:32:43.721226 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:32:43.721238 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:32:43.721249 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:32:43.721261 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:32:43.721273 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:32:43.721286 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:32:43.721298 | orchestrator | 2026-03-28 02:32:43.721311 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 02:32:43.721389 | orchestrator | Saturday 28 
March 2026 02:32:42 +0000 (0:00:01.943) 0:00:27.967 ******** 2026-03-28 02:32:43.721410 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:43.721446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:43.721460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:43.721472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:43.721485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:43.721497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:43.721518 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:43.721540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:43.721562 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:43.721587 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-28 02:32:50.025936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:50.026113 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026137 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.026171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:50.026210 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026224 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026238 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.026272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:32:50.026287 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026301 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.026330 | orchestrator | 2026-03-28 02:32:50.026345 | orchestrator | TASK 
[common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-28 02:32:50.026360 | orchestrator | Saturday 28 March 2026 02:32:44 +0000 (0:00:01.590) 0:00:29.557 ******** 2026-03-28 02:32:50.026372 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026405 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026431 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026444 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026457 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 02:32:50.026469 | orchestrator | 2026-03-28 02:32:50.026482 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-28 02:32:50.026495 | orchestrator | Saturday 28 March 2026 02:32:46 +0000 (0:00:01.978) 0:00:31.535 ******** 2026-03-28 02:32:50.026508 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026535 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026560 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026574 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026589 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026602 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 02:32:50.026616 | orchestrator | 2026-03-28 02:32:50.026630 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-28 02:32:50.026644 | orchestrator | Saturday 28 March 2026 02:32:47 +0000 (0:00:01.795) 0:00:33.331 ******** 2026-03-28 02:32:50.026660 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.026696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.552394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.552489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.552521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.552543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-28 02:32:50.552550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 02:32:50.552557 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552586 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552638 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:32:50.552651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:34:20.358815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:34:20.358952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:34:20.358971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:34:20.358998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:34:20.359011 | orchestrator | 2026-03-28 02:34:20.359024 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-28 02:34:20.359037 | orchestrator | Saturday 28 March 2026 02:32:50 +0000 (0:00:02.579) 0:00:35.910 ******** 2026-03-28 02:34:20.359048 | orchestrator | changed: [testbed-manager] 2026-03-28 02:34:20.359060 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:20.359071 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:20.359082 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:20.359093 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:34:20.359104 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:34:20.359115 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:34:20.359125 | orchestrator | 2026-03-28 02:34:20.359137 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-28 02:34:20.359147 | orchestrator | Saturday 28 March 2026 02:32:52 +0000 (0:00:01.482) 0:00:37.393 ******** 2026-03-28 02:34:20.359158 | orchestrator | changed: [testbed-manager] 2026-03-28 02:34:20.359169 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:20.359179 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:20.359190 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:20.359200 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:34:20.359211 | orchestrator | changed: 
[testbed-node-4] 2026-03-28 02:34:20.359222 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:34:20.359232 | orchestrator | 2026-03-28 02:34:20.359243 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359254 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:01.056) 0:00:38.450 ******** 2026-03-28 02:34:20.359265 | orchestrator | 2026-03-28 02:34:20.359276 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359287 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.068) 0:00:38.518 ******** 2026-03-28 02:34:20.359298 | orchestrator | 2026-03-28 02:34:20.359309 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359321 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.063) 0:00:38.582 ******** 2026-03-28 02:34:20.359333 | orchestrator | 2026-03-28 02:34:20.359346 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359358 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.063) 0:00:38.645 ******** 2026-03-28 02:34:20.359370 | orchestrator | 2026-03-28 02:34:20.359383 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359404 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.249) 0:00:38.894 ******** 2026-03-28 02:34:20.359416 | orchestrator | 2026-03-28 02:34:20.359428 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359441 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.059) 0:00:38.954 ******** 2026-03-28 02:34:20.359453 | orchestrator | 2026-03-28 02:34:20.359466 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 02:34:20.359478 
| orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.064) 0:00:39.019 ******** 2026-03-28 02:34:20.359491 | orchestrator | 2026-03-28 02:34:20.359504 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-28 02:34:20.359517 | orchestrator | Saturday 28 March 2026 02:32:53 +0000 (0:00:00.086) 0:00:39.105 ******** 2026-03-28 02:34:20.359529 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:20.359543 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:20.359556 | orchestrator | changed: [testbed-manager] 2026-03-28 02:34:20.359569 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:34:20.359582 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:34:20.359610 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:20.359624 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:34:20.359636 | orchestrator | 2026-03-28 02:34:20.359648 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-28 02:34:20.359661 | orchestrator | Saturday 28 March 2026 02:33:35 +0000 (0:00:41.343) 0:01:20.449 ******** 2026-03-28 02:34:20.359673 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:20.359684 | orchestrator | changed: [testbed-manager] 2026-03-28 02:34:20.359695 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:20.359706 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:34:20.359717 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:34:20.359727 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:34:20.359738 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:20.359749 | orchestrator | 2026-03-28 02:34:20.359760 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-28 02:34:20.359771 | orchestrator | Saturday 28 March 2026 02:34:09 +0000 (0:00:34.562) 0:01:55.012 ******** 2026-03-28 02:34:20.359800 | orchestrator | ok: [testbed-manager] 
2026-03-28 02:34:20.359813 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:34:20.359824 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:34:20.359834 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:34:20.359845 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:34:20.359856 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:34:20.359867 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:34:20.359878 | orchestrator | 2026-03-28 02:34:20.359889 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-28 02:34:20.359900 | orchestrator | Saturday 28 March 2026 02:34:11 +0000 (0:00:01.946) 0:01:56.958 ******** 2026-03-28 02:34:20.359911 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:20.359922 | orchestrator | changed: [testbed-manager] 2026-03-28 02:34:20.359933 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:20.359944 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:34:20.359955 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:34:20.359966 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:20.359977 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:34:20.359988 | orchestrator | 2026-03-28 02:34:20.359999 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:34:20.360011 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360024 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360053 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360073 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360084 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 
ignored=0 2026-03-28 02:34:20.360095 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360106 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 02:34:20.360117 | orchestrator | 2026-03-28 02:34:20.360128 | orchestrator | 2026-03-28 02:34:20.360139 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:34:20.360150 | orchestrator | Saturday 28 March 2026 02:34:20 +0000 (0:00:08.726) 0:02:05.685 ******** 2026-03-28 02:34:20.360161 | orchestrator | =============================================================================== 2026-03-28 02:34:20.360172 | orchestrator | common : Restart fluentd container ------------------------------------- 41.34s 2026-03-28 02:34:20.360183 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.56s 2026-03-28 02:34:20.360194 | orchestrator | common : Restart cron container ----------------------------------------- 8.73s 2026-03-28 02:34:20.360205 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.52s 2026-03-28 02:34:20.360216 | orchestrator | common : Copying over config.json files for services -------------------- 3.46s 2026-03-28 02:34:20.360227 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.78s 2026-03-28 02:34:20.360237 | orchestrator | common : Check common containers ---------------------------------------- 2.58s 2026-03-28 02:34:20.360248 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.55s 2026-03-28 02:34:20.360259 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.10s 2026-03-28 02:34:20.360270 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.98s 2026-03-28 02:34:20.360281 
| orchestrator | common : Initializing toolbox container using normal user --------------- 1.95s 2026-03-28 02:34:20.360292 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.94s 2026-03-28 02:34:20.360302 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.90s 2026-03-28 02:34:20.360313 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.80s 2026-03-28 02:34:20.360324 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.59s 2026-03-28 02:34:20.360335 | orchestrator | common : Creating log volume -------------------------------------------- 1.48s 2026-03-28 02:34:20.360353 | orchestrator | common : include_tasks -------------------------------------------------- 1.44s 2026-03-28 02:34:20.785486 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s 2026-03-28 02:34:20.785562 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.24s 2026-03-28 02:34:20.785582 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.06s 2026-03-28 02:34:23.163923 | orchestrator | 2026-03-28 02:34:23 | INFO  | Task eb1e1c57-dc5f-4b4b-a9d2-2de37863a731 (loadbalancer) was prepared for execution. 2026-03-28 02:34:23.164011 | orchestrator | 2026-03-28 02:34:23 | INFO  | It takes a moment until task eb1e1c57-dc5f-4b4b-a9d2-2de37863a731 (loadbalancer) has been started and output is visible here. 
2026-03-28 02:34:38.351512 | orchestrator | 2026-03-28 02:34:38.351648 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 02:34:38.351675 | orchestrator | 2026-03-28 02:34:38.351689 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 02:34:38.351701 | orchestrator | Saturday 28 March 2026 02:34:27 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-03-28 02:34:38.351742 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:34:38.351756 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:34:38.351767 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:34:38.351847 | orchestrator | 2026-03-28 02:34:38.351880 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 02:34:38.351898 | orchestrator | Saturday 28 March 2026 02:34:27 +0000 (0:00:00.296) 0:00:00.545 ******** 2026-03-28 02:34:38.351917 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-28 02:34:38.351934 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-28 02:34:38.351952 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-28 02:34:38.351968 | orchestrator | 2026-03-28 02:34:38.351986 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-28 02:34:38.352003 | orchestrator | 2026-03-28 02:34:38.352021 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-28 02:34:38.352040 | orchestrator | Saturday 28 March 2026 02:34:28 +0000 (0:00:00.444) 0:00:00.989 ******** 2026-03-28 02:34:38.352080 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:34:38.352101 | orchestrator | 2026-03-28 02:34:38.352118 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2026-03-28 02:34:38.352138 | orchestrator | Saturday 28 March 2026 02:34:28 +0000 (0:00:00.594) 0:00:01.584 ******** 2026-03-28 02:34:38.352158 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:34:38.352178 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:34:38.352198 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:34:38.352217 | orchestrator | 2026-03-28 02:34:38.352234 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-28 02:34:38.352248 | orchestrator | Saturday 28 March 2026 02:34:29 +0000 (0:00:00.597) 0:00:02.181 ******** 2026-03-28 02:34:38.352261 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:34:38.352274 | orchestrator | 2026-03-28 02:34:38.352284 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-28 02:34:38.352295 | orchestrator | Saturday 28 March 2026 02:34:30 +0000 (0:00:00.702) 0:00:02.884 ******** 2026-03-28 02:34:38.352306 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:34:38.352316 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:34:38.352346 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:34:38.352357 | orchestrator | 2026-03-28 02:34:38.352378 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-28 02:34:38.352390 | orchestrator | Saturday 28 March 2026 02:34:30 +0000 (0:00:00.633) 0:00:03.518 ******** 2026-03-28 02:34:38.352401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352434 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352456 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-28 02:34:38.352467 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 02:34:38.352479 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 02:34:38.352489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 02:34:38.352500 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-28 02:34:38.352525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 02:34:38.352537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-28 02:34:38.352548 | orchestrator | 2026-03-28 02:34:38.352559 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 02:34:38.352570 | orchestrator | Saturday 28 March 2026 02:34:33 +0000 (0:00:03.229) 0:00:06.748 ******** 2026-03-28 02:34:38.352581 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-28 02:34:38.352593 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-28 02:34:38.352604 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-28 02:34:38.352615 | orchestrator | 2026-03-28 02:34:38.352626 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 02:34:38.352637 | orchestrator | Saturday 28 March 2026 02:34:34 +0000 (0:00:00.840) 0:00:07.588 ******** 2026-03-28 02:34:38.352648 | orchestrator | changed: [testbed-node-1] => 
(item=ip_vs) 2026-03-28 02:34:38.352659 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-28 02:34:38.352670 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-28 02:34:38.352681 | orchestrator | 2026-03-28 02:34:38.352692 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 02:34:38.352703 | orchestrator | Saturday 28 March 2026 02:34:35 +0000 (0:00:01.232) 0:00:08.821 ******** 2026-03-28 02:34:38.352714 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-28 02:34:38.352725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:34:38.352759 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-28 02:34:38.352821 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:34:38.352833 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-28 02:34:38.352844 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:34:38.352855 | orchestrator | 2026-03-28 02:34:38.352866 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-28 02:34:38.352877 | orchestrator | Saturday 28 March 2026 02:34:36 +0000 (0:00:00.501) 0:00:09.323 ******** 2026-03-28 02:34:38.352891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:38.352918 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:38.352931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:38.352951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 
02:34:38.352963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:38.352983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:43.693555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:43.693674 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:43.693692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:43.693706 | orchestrator | 2026-03-28 02:34:43.693720 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-28 02:34:43.693733 | orchestrator | Saturday 28 March 2026 02:34:38 +0000 (0:00:01.837) 0:00:11.160 ******** 2026-03-28 02:34:43.693744 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:43.693847 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:43.693868 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:43.693884 | orchestrator | 2026-03-28 02:34:43.693904 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-28 02:34:43.693922 | orchestrator | Saturday 28 March 2026 02:34:39 +0000 (0:00:00.923) 0:00:12.084 ******** 2026-03-28 02:34:43.693940 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-28 02:34:43.693958 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-28 
02:34:43.693976 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-28 02:34:43.693994 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-28 02:34:43.694012 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-28 02:34:43.694112 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-28 02:34:43.694135 | orchestrator | 2026-03-28 02:34:43.694154 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-28 02:34:43.694174 | orchestrator | Saturday 28 March 2026 02:34:40 +0000 (0:00:01.484) 0:00:13.569 ******** 2026-03-28 02:34:43.694193 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:34:43.694210 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:34:43.694230 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:34:43.694250 | orchestrator | 2026-03-28 02:34:43.694269 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-28 02:34:43.694287 | orchestrator | Saturday 28 March 2026 02:34:41 +0000 (0:00:00.918) 0:00:14.488 ******** 2026-03-28 02:34:43.694304 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:34:43.694322 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:34:43.694342 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:34:43.694361 | orchestrator | 2026-03-28 02:34:43.694379 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-28 02:34:43.694395 | orchestrator | Saturday 28 March 2026 02:34:43 +0000 (0:00:01.410) 0:00:15.898 ******** 2026-03-28 02:34:43.694408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:34:43.694443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:34:43.694456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:43.694469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:43.694495 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:34:43.694507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:34:43.694558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:34:43.694572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:43.694584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:43.694595 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:34:43.694616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:34:46.546700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:34:46.546885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:46.546910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:46.546932 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:34:46.546954 | orchestrator | 2026-03-28 02:34:46.546973 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-28 02:34:46.546992 | orchestrator | Saturday 28 March 2026 02:34:43 +0000 (0:00:00.602) 0:00:16.500 ******** 2026-03-28 02:34:46.547013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:46.547034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:46.547054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:46.547133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:46.547160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:46.547182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:46.547200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:46.547222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:46.547246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', 
'__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:46.547303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:54.955760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:34:54.955936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d', 
'__omit_place_holder__41db29902689822375862e38c9b2019a7e229d3d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 02:34:54.955955 | orchestrator | 2026-03-28 02:34:54.955968 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-28 02:34:54.955982 | orchestrator | Saturday 28 March 2026 02:34:46 +0000 (0:00:02.856) 0:00:19.357 ******** 2026-03-28 02:34:54.955993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956106 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:34:54.956118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:54.956130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:54.956141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:34:54.956152 | orchestrator | 2026-03-28 02:34:54.956163 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-28 02:34:54.956174 | orchestrator | Saturday 28 March 2026 02:34:49 +0000 (0:00:03.271) 0:00:22.628 ******** 2026-03-28 02:34:54.956195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 02:34:54.956207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 02:34:54.956217 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 02:34:54.956228 | orchestrator | 2026-03-28 02:34:54.956240 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-28 02:34:54.956253 | orchestrator | Saturday 28 March 2026 02:34:51 +0000 (0:00:01.872) 0:00:24.500 ******** 2026-03-28 02:34:54.956265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 02:34:54.956277 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 02:34:54.956289 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 02:34:54.956301 | orchestrator | 2026-03-28 02:34:54.956314 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-28 02:34:54.956326 | orchestrator | Saturday 28 March 2026 02:34:54 +0000 
(0:00:02.721) 0:00:27.221 ******** 2026-03-28 02:34:54.956339 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:34:54.956352 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:34:54.956365 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:34:54.956378 | orchestrator | 2026-03-28 02:34:54.956401 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-28 02:35:06.399441 | orchestrator | Saturday 28 March 2026 02:34:54 +0000 (0:00:00.552) 0:00:27.774 ******** 2026-03-28 02:35:06.399552 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 02:35:06.399585 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 02:35:06.399600 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-28 02:35:06.399616 | orchestrator | 2026-03-28 02:35:06.399630 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-28 02:35:06.399645 | orchestrator | Saturday 28 March 2026 02:34:57 +0000 (0:00:02.059) 0:00:29.834 ******** 2026-03-28 02:35:06.399658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 02:35:06.399672 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 02:35:06.399686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-28 02:35:06.399699 | orchestrator | 2026-03-28 02:35:06.399712 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-28 02:35:06.399725 | orchestrator | Saturday 28 March 2026 
02:34:59 +0000 (0:00:02.069) 0:00:31.904 ******** 2026-03-28 02:35:06.399739 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-28 02:35:06.399752 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-28 02:35:06.399817 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-28 02:35:06.399834 | orchestrator | 2026-03-28 02:35:06.399864 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-28 02:35:06.399877 | orchestrator | Saturday 28 March 2026 02:35:00 +0000 (0:00:01.469) 0:00:33.373 ******** 2026-03-28 02:35:06.399891 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-28 02:35:06.399904 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-28 02:35:06.399912 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-28 02:35:06.399920 | orchestrator | 2026-03-28 02:35:06.399947 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-28 02:35:06.399956 | orchestrator | Saturday 28 March 2026 02:35:01 +0000 (0:00:01.420) 0:00:34.794 ******** 2026-03-28 02:35:06.399966 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:35:06.399976 | orchestrator | 2026-03-28 02:35:06.399984 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-28 02:35:06.399991 | orchestrator | Saturday 28 March 2026 02:35:02 +0000 (0:00:00.523) 0:00:35.317 ******** 2026-03-28 02:35:06.400002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:06.400097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:06.400105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:06.400113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:06.400121 | orchestrator | 2026-03-28 02:35:06.400130 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-28 02:35:06.400138 | orchestrator | Saturday 28 March 2026 02:35:05 +0000 (0:00:03.327) 0:00:38.645 ******** 2026-03-28 02:35:06.400156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:07.183948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:07.184081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:07.184143 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:07.184170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:07.184191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:07.184211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:07.184230 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:07.184250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:07.184316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:07.184339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:07.184374 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:07.184394 | orchestrator | 2026-03-28 02:35:07.184415 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-28 
02:35:07.184437 | orchestrator | Saturday 28 March 2026 02:35:06 +0000 (0:00:00.568) 0:00:39.213 ******** 2026-03-28 02:35:07.184458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:07.184480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:07.184501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:07.184546 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:07.184582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:07.184625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:08.033160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:08.033292 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:08.033312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:08.033326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:08.033338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:08.033350 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:08.033362 | orchestrator | 2026-03-28 02:35:08.033374 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-28 02:35:08.033387 | orchestrator | Saturday 28 March 2026 02:35:07 +0000 (0:00:00.784) 0:00:39.998 ******** 2026-03-28 02:35:08.033399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:08.033411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:08.033444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:08.033463 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:08.033475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:08.033487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:08.033499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:08.033510 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:08.033521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:08.033550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:08.033603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:08.033632 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:09.238236 | orchestrator | 2026-03-28 02:35:09.238324 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 02:35:09.238340 | orchestrator | Saturday 28 March 2026 02:35:08 +0000 (0:00:00.844) 0:00:40.842 ******** 2026-03-28 02:35:09.238355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:09.238370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:09.238383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:09.238396 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:09.238409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:09.238421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:09.238454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:09.238485 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:09.238514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:09.238528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:09.238540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:09.238551 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:09.238562 | orchestrator | 2026-03-28 02:35:09.238574 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 02:35:09.238585 | orchestrator | Saturday 28 March 2026 02:35:08 +0000 (0:00:00.526) 0:00:41.369 ******** 2026-03-28 02:35:09.238597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:09.238609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:09.238633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:09.238645 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:09.238669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:10.105142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:10.105223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:10.105238 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:10.105249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:10.105259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:10.105269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:10.105296 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:10.105306 | orchestrator | 2026-03-28 02:35:10.105316 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-28 02:35:10.105326 | orchestrator | Saturday 28 March 2026 02:35:09 +0000 (0:00:00.686) 0:00:42.056 ******** 2026-03-28 02:35:10.105355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-28 02:35:10.105392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:10.105409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:10.105425 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:10.105441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-28 02:35:10.105458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:10.105485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:10.105495 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:10.105509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-28 02:35:10.105524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:11.435637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:11.435832 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:11.435853 | orchestrator | 2026-03-28 02:35:11.436638 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-28 02:35:11.436658 | orchestrator | Saturday 28 March 2026 02:35:10 +0000 (0:00:00.861) 0:00:42.917 ******** 2026-03-28 02:35:11.436674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:11.436688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:11.436722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:11.436735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:11.436747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:11.436801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:11.436835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:11.436847 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:11.436859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:11.436871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:11.436890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:11.436902 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:11.436913 | orchestrator | 2026-03-28 02:35:11.436924 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-28 02:35:11.436935 | orchestrator | Saturday 28 March 2026 02:35:10 +0000 (0:00:00.502) 0:00:43.420 ******** 2026-03-28 02:35:11.436947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 02:35:11.436959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:11.436986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:18.018333 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:18.018462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 02:35:18.018477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:18.018513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:18.018523 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:18.018532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 02:35:18.018540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 02:35:18.018566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 02:35:18.018573 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:18.018581 | orchestrator | 2026-03-28 02:35:18.018590 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-28 02:35:18.018599 | orchestrator | Saturday 28 March 2026 02:35:11 +0000 (0:00:00.828) 0:00:44.249 ******** 2026-03-28 02:35:18.018607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 02:35:18.018632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 02:35:18.018640 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 02:35:18.018647 | orchestrator | 2026-03-28 02:35:18.018654 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-28 02:35:18.018662 | orchestrator | Saturday 28 March 2026 02:35:13 +0000 (0:00:01.668) 0:00:45.917 ******** 2026-03-28 02:35:18.018670 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 02:35:18.018678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 02:35:18.018685 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 02:35:18.018692 | orchestrator | 2026-03-28 02:35:18.018706 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-28 02:35:18.018713 | orchestrator | Saturday 28 March 2026 02:35:14 +0000 (0:00:01.734) 0:00:47.652 ******** 2026-03-28 02:35:18.018720 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 02:35:18.018728 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 02:35:18.018735 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 02:35:18.018743 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 02:35:18.018750 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:18.018757 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 02:35:18.018787 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:18.018796 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 02:35:18.018803 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:18.018810 | orchestrator | 2026-03-28 02:35:18.018817 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-28 02:35:18.018825 | orchestrator | Saturday 28 March 2026 02:35:15 +0000 (0:00:00.765) 0:00:48.418 ******** 2026-03-28 02:35:18.018832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:18.018842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:18.018856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 02:35:18.018873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:22.038262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:22.038391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 02:35:22.038411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:22.038424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:22.038436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 02:35:22.038448 | orchestrator | 2026-03-28 02:35:22.038472 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-28 02:35:22.038503 | orchestrator | Saturday 28 March 2026 02:35:18 +0000 (0:00:02.414) 0:00:50.832 ******** 2026-03-28 02:35:22.038516 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:35:22.038527 | orchestrator | 2026-03-28 02:35:22.038539 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-28 02:35:22.038550 | orchestrator | Saturday 28 March 2026 02:35:18 +0000 (0:00:00.806) 0:00:51.639 ******** 2026-03-28 02:35:22.038582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 02:35:22.038618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:22.038631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.038643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.038655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 02:35:22.038672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 02:35:22.038699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:22.671267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:22.671376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671392 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671449 | orchestrator | 2026-03-28 02:35:22.671462 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] 
*** 2026-03-28 02:35:22.671475 | orchestrator | Saturday 28 March 2026 02:35:22 +0000 (0:00:03.213) 0:00:54.852 ******** 2026-03-28 02:35:22.671489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 02:35:22.671542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:22.671556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671580 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:22.671594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 02:35:22.671611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:22.671631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:22.671652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.113897 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:31.114075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 02:35:31.114101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 02:35:31.114115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-03-28 02:35:31.114127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.114163 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:31.114176 | orchestrator | 2026-03-28 02:35:31.114190 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-28 02:35:31.114211 | orchestrator | Saturday 28 March 2026 02:35:22 +0000 (0:00:00.637) 0:00:55.489 ******** 2026-03-28 02:35:31.114231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114273 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:31.114311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114353 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:31.114370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-28 02:35:31.114416 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:31.114429 | orchestrator | 2026-03-28 02:35:31.114442 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-28 02:35:31.114454 | orchestrator | Saturday 28 March 2026 02:35:23 +0000 (0:00:01.114) 0:00:56.603 ******** 2026-03-28 02:35:31.114466 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:35:31.114479 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:35:31.114491 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:35:31.114504 | orchestrator | 2026-03-28 02:35:31.114517 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-28 02:35:31.114530 | orchestrator | Saturday 28 March 2026 02:35:25 +0000 (0:00:01.274) 0:00:57.877 ******** 2026-03-28 02:35:31.114540 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:35:31.114551 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:35:31.114562 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:35:31.114573 | orchestrator | 2026-03-28 02:35:31.114583 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-28 02:35:31.114594 | orchestrator | Saturday 28 March 2026 02:35:27 +0000 (0:00:02.052) 0:00:59.930 ******** 2026-03-28 02:35:31.114605 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:35:31.114616 | 
orchestrator | 2026-03-28 02:35:31.114627 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-28 02:35:31.114638 | orchestrator | Saturday 28 March 2026 02:35:27 +0000 (0:00:00.596) 0:01:00.527 ******** 2026-03-28 02:35:31.114651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 02:35:31.114678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.114697 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.114717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 02:35:31.791390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.791519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:35:31.791583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 02:35:31.791624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791667 | orchestrator |
2026-03-28 02:35:31.791682 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-28 02:35:31.791694 | orchestrator | Saturday 28 March 2026 02:35:31 +0000 (0:00:03.399) 0:01:03.926 ********
2026-03-28 02:35:31.791728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-28 02:35:31.791741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791824 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:31.791844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-28 02:35:31.791856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 02:35:31.791882 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:31.791904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-28 02:35:41.252208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 02:35:41.252297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 02:35:41.252308 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:41.252318 | orchestrator |
2026-03-28 02:35:41.252326 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-28 02:35:41.252334 | orchestrator | Saturday 28 March 2026 02:35:31 +0000 (0:00:00.674) 0:01:04.601 ********
2026-03-28 02:35:41.252357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252375 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:41.252382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252401 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:41.252413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-28 02:35:41.252436 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:41.252447 | orchestrator |
2026-03-28 02:35:41.252457 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-28 02:35:41.252468 | orchestrator | Saturday 28 March 2026 02:35:32 +0000 (0:00:00.883) 0:01:05.484 ********
2026-03-28 02:35:41.252480 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:35:41.252493 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:35:41.252505 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:35:41.252518 | orchestrator |
2026-03-28 02:35:41.252529 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-28 02:35:41.252540 | orchestrator | Saturday 28 March 2026 02:35:34 +0000 (0:00:01.557) 0:01:07.041 ********
2026-03-28 02:35:41.252566 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:35:41.252573 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:35:41.252580 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:35:41.252586 | orchestrator |
2026-03-28 02:35:41.252593 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-28 02:35:41.252600 | orchestrator | Saturday 28 March 2026 02:35:36 +0000 (0:00:01.984) 0:01:09.025 ********
2026-03-28 02:35:41.252606 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:41.252613 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:41.252619 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:41.252626 | orchestrator |
2026-03-28 02:35:41.252632 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-28 02:35:41.252639 | orchestrator | Saturday 28 March 2026 02:35:36 +0000 (0:00:00.331) 0:01:09.357 ********
2026-03-28 02:35:41.252646 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:35:41.252652 | orchestrator |
2026-03-28 02:35:41.252659 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-28 02:35:41.252679 | orchestrator | Saturday 28 March 2026 02:35:37 +0000 (0:00:00.659) 0:01:10.017 ********
2026-03-28 02:35:41.252689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:41.252703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:41.252710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:41.252717 | orchestrator |
2026-03-28 02:35:41.252724 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-28 02:35:41.252732 | orchestrator | Saturday 28 March 2026 02:35:39 +0000 (0:00:02.648) 0:01:12.665 ********
2026-03-28 02:35:41.252744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:41.252798 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:41.252814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:49.120653 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:49.120843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 02:35:49.120869 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:49.120985 | orchestrator |
2026-03-28 02:35:49.121001 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-28 02:35:49.121014 | orchestrator | Saturday 28 March 2026 02:35:41 +0000 (0:00:01.403) 0:01:14.069 ********
2026-03-28 02:35:49.121046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121074 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:49.121113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121140 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:49.121153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 02:35:49.121178 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:49.121191 | orchestrator |
2026-03-28 02:35:49.121204 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-28 02:35:49.121217 | orchestrator | Saturday 28 March 2026 02:35:42 +0000 (0:00:01.690) 0:01:15.760 ********
2026-03-28 02:35:49.121230 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:49.121242 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:49.121255 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:49.121268 | orchestrator |
2026-03-28 02:35:49.121286 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-28 02:35:49.121317 | orchestrator | Saturday 28 March 2026 02:35:43 +0000 (0:00:00.429) 0:01:16.189 ********
2026-03-28 02:35:49.121331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:49.121343 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:49.121355 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:35:49.121368 | orchestrator |
2026-03-28 02:35:49.121406 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-28 02:35:49.121419 | orchestrator | Saturday 28 March 2026 02:35:44 +0000 (0:00:01.308) 0:01:17.497 ********
2026-03-28 02:35:49.121431 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:35:49.121444 | orchestrator |
2026-03-28 02:35:49.121457 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-28 02:35:49.121477 | orchestrator | Saturday 28 March 2026 02:35:45 +0000 (0:00:00.927) 0:01:18.424 ********
2026-03-28 02:35:49.121535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:49.121573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.121594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.121618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.121651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:49.805498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:49.805696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805813 | orchestrator |
2026-03-28 02:35:49.805835 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-28 02:35:49.805848 | orchestrator | Saturday 28 March 2026 02:35:49 +0000 (0:00:03.600) 0:01:22.024 ********
2026-03-28 02:35:49.805861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:49.805873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 02:35:49.805909 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:35:49.805933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:56.120081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:56.120270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 02:35:56.120334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 02:35:56.120350 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:35:56.120367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-28 02:35:56.120383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 02:35:56.120455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28
02:35:56.120471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 02:35:56.120484 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:56.120496 | orchestrator | 2026-03-28 02:35:56.120510 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-28 02:35:56.120524 | orchestrator | Saturday 28 March 2026 02:35:49 +0000 (0:00:00.717) 0:01:22.742 ******** 2026-03-28 02:35:56.120538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120566 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:56.120578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120604 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:56.120617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-28 02:35:56.120639 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:56.120650 | orchestrator | 2026-03-28 02:35:56.120661 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-28 02:35:56.120672 | orchestrator | Saturday 28 March 2026 02:35:51 +0000 (0:00:01.134) 0:01:23.876 ******** 2026-03-28 02:35:56.120683 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:35:56.120702 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:35:56.120713 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:35:56.120724 | orchestrator | 2026-03-28 02:35:56.120735 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-28 02:35:56.120746 | orchestrator | Saturday 28 March 2026 02:35:52 +0000 (0:00:01.315) 0:01:25.192 ******** 2026-03-28 02:35:56.120787 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:35:56.120808 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:35:56.120827 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:35:56.120842 | orchestrator | 2026-03-28 02:35:56.120853 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-28 
02:35:56.120864 | orchestrator | Saturday 28 March 2026 02:35:54 +0000 (0:00:02.048) 0:01:27.240 ******** 2026-03-28 02:35:56.120875 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:56.120886 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:56.120896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:56.120907 | orchestrator | 2026-03-28 02:35:56.120918 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-28 02:35:56.120929 | orchestrator | Saturday 28 March 2026 02:35:54 +0000 (0:00:00.352) 0:01:27.593 ******** 2026-03-28 02:35:56.120940 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:35:56.120951 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:35:56.120961 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:35:56.120972 | orchestrator | 2026-03-28 02:35:56.120983 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-28 02:35:56.120994 | orchestrator | Saturday 28 March 2026 02:35:55 +0000 (0:00:00.376) 0:01:27.969 ******** 2026-03-28 02:35:56.121004 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:35:56.121015 | orchestrator | 2026-03-28 02:35:56.121026 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-28 02:35:56.121037 | orchestrator | Saturday 28 March 2026 02:35:56 +0000 (0:00:00.965) 0:01:28.934 ******** 2026-03-28 02:35:59.335297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 02:35:59.335405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 02:35:59.335422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:35:59.335458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:35:59.335473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:35:59.335509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:35:59.335524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:35:59.335536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:35:59.335548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 
02:35:59.335569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 02:35:59.335581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 02:35:59.335601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:36:00.231545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.231656 | orchestrator | 2026-03-28 02:36:00.231673 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-28 02:36:00.231689 | orchestrator | Saturday 28 March 2026 02:35:59 +0000 (0:00:03.446) 0:01:32.381 ******** 2026-03-28 02:36:00.231704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 02:36:00.231719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:36:00.231734 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.232090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.605494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.605597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.605638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.605652 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:00.605669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 02:36:00.605681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:36:00.606385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.606436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.606449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.606474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.606491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 02:36:00.606503 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:00.606516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 02:36:00.606528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 02:36:00.606548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 02:36:10.335223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 02:36:10.335344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 02:36:10.335378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:36:10.335392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 02:36:10.335404 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:10.335419 | orchestrator | 2026-03-28 02:36:10.335431 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-28 02:36:10.335444 | orchestrator | Saturday 28 March 2026 02:36:00 +0000 (0:00:01.039) 0:01:33.421 ******** 2026-03-28 02:36:10.335456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335482 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:10.335494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335516 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:10.335527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-28 02:36:10.335572 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:10.335583 | orchestrator | 2026-03-28 02:36:10.335594 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-28 02:36:10.335621 | orchestrator | Saturday 28 March 2026 02:36:01 +0000 (0:00:01.246) 0:01:34.668 ******** 2026-03-28 02:36:10.335633 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:10.335644 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:36:10.335655 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:10.335666 | orchestrator | 2026-03-28 02:36:10.335677 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-28 02:36:10.335688 | orchestrator | Saturday 28 March 2026 02:36:03 +0000 (0:00:01.263) 0:01:35.932 ******** 2026-03-28 02:36:10.335699 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:36:10.335710 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:10.335720 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:10.335731 | 
orchestrator | 2026-03-28 02:36:10.335742 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-28 02:36:10.335754 | orchestrator | Saturday 28 March 2026 02:36:05 +0000 (0:00:01.980) 0:01:37.912 ******** 2026-03-28 02:36:10.335795 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:10.335808 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:10.335821 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:10.335834 | orchestrator | 2026-03-28 02:36:10.335847 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-28 02:36:10.335860 | orchestrator | Saturday 28 March 2026 02:36:05 +0000 (0:00:00.303) 0:01:38.216 ******** 2026-03-28 02:36:10.335873 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:36:10.335885 | orchestrator | 2026-03-28 02:36:10.335898 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-28 02:36:10.335911 | orchestrator | Saturday 28 March 2026 02:36:06 +0000 (0:00:00.984) 0:01:39.200 ******** 2026-03-28 02:36:10.335934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 02:36:10.335958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:13.292800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 02:36:13.292909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:13.292974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 02:36:13.292990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:13.293012 | orchestrator | 2026-03-28 02:36:13.293026 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-28 02:36:13.293038 | orchestrator | Saturday 28 March 2026 02:36:10 +0000 (0:00:04.093) 0:01:43.293 ******** 2026-03-28 02:36:13.293065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 02:36:13.396454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:13.396579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:13.396599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 
02:36:13.396647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:13.396669 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 02:36:13.396682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 02:36:13.396711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 02:36:25.032471 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:25.032603 | orchestrator | 2026-03-28 02:36:25.032629 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-28 02:36:25.032649 | orchestrator | 
Saturday 28 March 2026 02:36:13 +0000 (0:00:02.919) 0:01:46.213 ******** 2026-03-28 02:36:25.032672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032714 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:25.032733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032831 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:25.032850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 02:36:25.032909 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:25.032927 | orchestrator | 2026-03-28 02:36:25.032946 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-28 02:36:25.032964 | orchestrator | Saturday 28 March 2026 02:36:17 +0000 (0:00:03.656) 0:01:49.870 ******** 2026-03-28 02:36:25.033012 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:25.033033 | orchestrator 
| changed: [testbed-node-1]
2026-03-28 02:36:25.033051 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:25.033070 | orchestrator |
2026-03-28 02:36:25.033092 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-28 02:36:25.033111 | orchestrator | Saturday 28 March 2026 02:36:18 +0000 (0:00:01.361) 0:01:51.231 ********
2026-03-28 02:36:25.033130 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:36:25.033148 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:36:25.033168 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:25.033187 | orchestrator |
2026-03-28 02:36:25.033206 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-28 02:36:25.033307 | orchestrator | Saturday 28 March 2026 02:36:20 +0000 (0:00:02.031) 0:01:53.263 ********
2026-03-28 02:36:25.033331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:36:25.033349 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:36:25.033367 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:36:25.033385 | orchestrator |
2026-03-28 02:36:25.033404 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-28 02:36:25.033422 | orchestrator | Saturday 28 March 2026 02:36:20 +0000 (0:00:00.316) 0:01:53.579 ********
2026-03-28 02:36:25.033439 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:36:25.033454 | orchestrator |
2026-03-28 02:36:25.033470 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-28 02:36:25.033487 | orchestrator | Saturday 28 March 2026 02:36:21 +0000 (0:00:01.086) 0:01:54.666 ********
2026-03-28 02:36:25.033504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 02:36:25.033523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 02:36:25.033540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 02:36:25.033557 | 
orchestrator | 2026-03-28 02:36:25.033573 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-28 02:36:25.033605 | orchestrator | Saturday 28 March 2026 02:36:24 +0000 (0:00:02.964) 0:01:57.630 ******** 2026-03-28 02:36:25.033624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 02:36:25.033657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 02:36:34.049419 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:34.049504 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:34.049514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 02:36:34.049584 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:34.049595 | orchestrator | 2026-03-28 02:36:34.049603 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-28 02:36:34.049610 | orchestrator | Saturday 28 March 2026 02:36:25 +0000 (0:00:00.406) 0:01:58.037 ******** 2026-03-28 02:36:34.049617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 02:36:34.049626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 02:36:34.049633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:34.049640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-28 02:36:34.049646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-28 02:36:34.049653 | orchestrator | skipping: 
[testbed-node-1]
2026-03-28 02:36:34.049659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-28 02:36:34.049665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-28 02:36:34.049686 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:36:34.049692 | orchestrator |
2026-03-28 02:36:34.049699 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-28 02:36:34.049705 | orchestrator | Saturday 28 March 2026 02:36:26 +0000 (0:00:00.870) 0:01:58.907 ********
2026-03-28 02:36:34.049711 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:36:34.049718 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:36:34.049724 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:34.049730 | orchestrator |
2026-03-28 02:36:34.049736 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-28 02:36:34.049743 | orchestrator | Saturday 28 March 2026 02:36:27 +0000 (0:00:01.323) 0:02:00.231 ********
2026-03-28 02:36:34.049749 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:36:34.049820 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:36:34.049828 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:34.049834 | orchestrator |
2026-03-28 02:36:34.049840 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-28 02:36:34.049851 | orchestrator | Saturday 28 March 2026 02:36:29 +0000 (0:00:02.038) 0:02:02.270 ********
2026-03-28 02:36:34.049857 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:36:34.049863 | orchestrator | skipping: [testbed-node-1]
2026-03-28
02:36:34.049870 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:34.049876 | orchestrator | 2026-03-28 02:36:34.049882 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-28 02:36:34.049889 | orchestrator | Saturday 28 March 2026 02:36:29 +0000 (0:00:00.338) 0:02:02.609 ******** 2026-03-28 02:36:34.049895 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:36:34.049901 | orchestrator | 2026-03-28 02:36:34.049907 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-28 02:36:34.049914 | orchestrator | Saturday 28 March 2026 02:36:30 +0000 (0:00:01.105) 0:02:03.714 ******** 2026-03-28 02:36:34.049940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 02:36:34.049960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 02:36:34.049975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 02:36:35.642538 | orchestrator | 2026-03-28 02:36:35.642666 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-28 02:36:35.642693 | orchestrator | Saturday 28 March 2026 02:36:34 +0000 (0:00:03.152) 0:02:06.866 ******** 2026-03-28 02:36:35.642744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 02:36:35.642843 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:35.642895 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 02:36:35.642950 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:35.642984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 02:36:35.643007 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:35.643027 | orchestrator | 2026-03-28 02:36:35.643047 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-28 02:36:35.643066 | orchestrator | Saturday 28 March 2026 02:36:34 +0000 (0:00:00.641) 0:02:07.508 ******** 2026-03-28 02:36:35.643088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:35.643124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:35.643144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:35.643169 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:44.346628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:44.346741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 02:36:44.346819 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:44.346836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:44.346867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:44.346881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:44.346894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 02:36:44.346905 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:44.346916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:44.346928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:44.346939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-28 02:36:44.346973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 02:36:44.346985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-03-28 02:36:44.346996 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:36:44.347007 | orchestrator |
2026-03-28 02:36:44.347019 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-28 02:36:44.347032 | orchestrator | Saturday 28 March 2026 02:36:35 +0000 (0:00:00.949) 0:02:08.457 ********
2026-03-28 02:36:44.347043 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:36:44.347054 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:36:44.347065 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:44.347075 | orchestrator |
2026-03-28 02:36:44.347086 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-28 02:36:44.347097 | orchestrator | Saturday 28 March 2026 02:36:37 +0000 (0:00:01.628) 0:02:10.086 ********
2026-03-28 02:36:44.347109 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:36:44.347120 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:36:44.347131 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:36:44.347141 | orchestrator |
2026-03-28 02:36:44.347152 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-28 02:36:44.347166 | orchestrator | Saturday 28 March 2026 02:36:39 +0000 (0:00:00.313) 0:02:12.110 ********
2026-03-28 02:36:44.347178 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:36:44.347190 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:36:44.347222 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:36:44.347236 | orchestrator |
2026-03-28 02:36:44.347248 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-28 02:36:44.347261 | orchestrator | Saturday 28 March 2026 02:36:39 +0000 (0:00:00.292) 0:02:12.424 ********
2026-03-28 02:36:44.347273 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:36:44.347286 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:36:44.347298 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:44.347310 | orchestrator | 2026-03-28 02:36:44.347323 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-28 02:36:44.347335 | orchestrator | Saturday 28 March 2026 02:36:39 +0000 (0:00:00.292) 0:02:12.716 ******** 2026-03-28 02:36:44.347347 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:36:44.347360 | orchestrator | 2026-03-28 02:36:44.347372 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-28 02:36:44.347385 | orchestrator | Saturday 28 March 2026 02:36:41 +0000 (0:00:01.209) 0:02:13.926 ******** 2026-03-28 02:36:44.347409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 02:36:44.347436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 02:36:44.347450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:44.347465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 02:36:44.347488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 02:36:44.932458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:44.932575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 02:36:44.932626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 02:36:44.932645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:44.932661 | 
orchestrator | 2026-03-28 02:36:44.932678 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-28 02:36:44.932694 | orchestrator | Saturday 28 March 2026 02:36:44 +0000 (0:00:03.232) 0:02:17.159 ******** 2026-03-28 02:36:44.932728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 02:36:44.932854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-28 02:36:44.932875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:44.932902 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:44.932920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 02:36:44.932937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 02:36:44.932953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:44.932968 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:44.933004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 02:36:54.061630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 02:36:54.061810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 02:36:54.061834 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:54.061849 | orchestrator | 2026-03-28 02:36:54.061862 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-28 02:36:54.061881 | orchestrator | Saturday 28 March 2026 02:36:44 +0000 (0:00:00.583) 0:02:17.742 ******** 2026-03-28 02:36:54.061903 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.061965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.061987 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:54.062007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.062104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.062127 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:54.062147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.062170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-28 02:36:54.062190 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:54.062208 
| orchestrator | 2026-03-28 02:36:54.062222 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-28 02:36:54.062236 | orchestrator | Saturday 28 March 2026 02:36:45 +0000 (0:00:01.059) 0:02:18.801 ******** 2026-03-28 02:36:54.062249 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:54.062261 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:36:54.062299 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:54.062312 | orchestrator | 2026-03-28 02:36:54.062325 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-28 02:36:54.062338 | orchestrator | Saturday 28 March 2026 02:36:47 +0000 (0:00:01.335) 0:02:20.137 ******** 2026-03-28 02:36:54.062350 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:54.062363 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:36:54.062375 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:54.062388 | orchestrator | 2026-03-28 02:36:54.062400 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-28 02:36:54.062413 | orchestrator | Saturday 28 March 2026 02:36:49 +0000 (0:00:02.016) 0:02:22.153 ******** 2026-03-28 02:36:54.062425 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:54.062452 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:54.062465 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:54.062478 | orchestrator | 2026-03-28 02:36:54.062492 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-28 02:36:54.062526 | orchestrator | Saturday 28 March 2026 02:36:49 +0000 (0:00:00.324) 0:02:22.478 ******** 2026-03-28 02:36:54.062538 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:36:54.062549 | orchestrator | 2026-03-28 02:36:54.062560 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-03-28 02:36:54.062571 | orchestrator | Saturday 28 March 2026 02:36:50 +0000 (0:00:01.173) 0:02:23.651 ******** 2026-03-28 02:36:54.062584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 02:36:54.062601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:54.062614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 02:36:54.062635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:54.062656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 02:36:59.343568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:59.343698 | orchestrator | 2026-03-28 02:36:59.343726 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-28 02:36:59.343809 | orchestrator | Saturday 28 March 2026 02:36:54 +0000 (0:00:03.218) 0:02:26.870 ******** 2026-03-28 02:36:59.343829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 02:36:59.343927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:59.343970 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:36:59.343989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 02:36:59.344024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:59.344036 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:59.344048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 02:36:59.344060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:36:59.344079 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:59.344091 | orchestrator | 2026-03-28 02:36:59.344102 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-28 02:36:59.344114 | orchestrator | Saturday 28 March 2026 02:36:54 +0000 (0:00:00.642) 0:02:27.513 ******** 2026-03-28 02:36:59.344126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344152 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 02:36:59.344163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344186 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:36:59.344197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-28 02:36:59.344219 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:36:59.344230 | orchestrator | 2026-03-28 02:36:59.344246 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-28 02:36:59.344257 | orchestrator | Saturday 28 March 2026 02:36:55 +0000 (0:00:00.926) 0:02:28.439 ******** 2026-03-28 02:36:59.344268 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:59.344279 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:36:59.344290 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:59.344300 | orchestrator | 2026-03-28 02:36:59.344311 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-28 02:36:59.344322 | orchestrator | Saturday 28 March 2026 02:36:57 +0000 (0:00:01.599) 0:02:30.039 ******** 2026-03-28 02:36:59.344333 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:36:59.344344 | orchestrator | changed: 
[testbed-node-1] 2026-03-28 02:36:59.344355 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:36:59.344365 | orchestrator | 2026-03-28 02:36:59.344376 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-28 02:36:59.344395 | orchestrator | Saturday 28 March 2026 02:36:59 +0000 (0:00:02.114) 0:02:32.153 ******** 2026-03-28 02:37:03.690480 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:03.690607 | orchestrator | 2026-03-28 02:37:03.690633 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-28 02:37:03.690651 | orchestrator | Saturday 28 March 2026 02:37:00 +0000 (0:00:01.082) 0:02:33.236 ******** 2026-03-28 02:37:03.690674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 02:37:03.690734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 02:37:03.690895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 02:37:03.690929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:03.690989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645403 | orchestrator | 2026-03-28 02:37:04.645524 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-28 02:37:04.645542 | orchestrator | Saturday 28 March 2026 02:37:03 +0000 (0:00:03.354) 0:02:36.591 ******** 2026-03-28 02:37:04.645581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 02:37:04.645597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645634 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:04.645663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 02:37:04.645695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645740 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:04.645788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 02:37:04.645801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 02:37:04.645840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 02:37:15.705338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:15.705445 | orchestrator | 2026-03-28 02:37:15.705461 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-28 02:37:15.705473 | orchestrator | Saturday 28 March 2026 02:37:04 +0000 (0:00:00.960) 0:02:37.552 ******** 2026-03-28 02:37:15.705484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705508 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:15.705519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705539 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:15.705554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-28 02:37:15.705587 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:15.705602 | orchestrator | 2026-03-28 02:37:15.705619 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-28 02:37:15.705637 | orchestrator | Saturday 28 March 2026 02:37:05 +0000 (0:00:00.913) 0:02:38.465 ******** 2026-03-28 02:37:15.705654 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:37:15.705671 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:15.705687 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:15.705705 | orchestrator | 2026-03-28 02:37:15.705715 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-28 02:37:15.705725 | orchestrator | Saturday 28 March 2026 02:37:06 +0000 (0:00:01.331) 0:02:39.797 ******** 2026-03-28 02:37:15.705734 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:37:15.705744 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:15.705786 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:15.705796 | orchestrator | 2026-03-28 02:37:15.705806 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-28 02:37:15.705816 | orchestrator | Saturday 28 March 2026 02:37:08 +0000 (0:00:01.973) 0:02:41.770 ******** 2026-03-28 02:37:15.705825 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:15.705835 | orchestrator | 2026-03-28 02:37:15.705845 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-28 02:37:15.705855 | orchestrator | Saturday 28 March 2026 02:37:10 +0000 (0:00:01.322) 0:02:43.093 ******** 2026-03-28 02:37:15.705865 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 02:37:15.705875 | orchestrator | 2026-03-28 02:37:15.705885 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-28 02:37:15.705913 | orchestrator | Saturday 28 March 2026 02:37:13 +0000 (0:00:03.029) 0:02:46.123 ******** 2026-03-28 02:37:15.705955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:15.705971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 02:37:15.705983 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:15.705999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:15.706087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 02:37:15.706103 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:15.706125 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:18.240465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 02:37:18.240599 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:18.240627 | orchestrator | 2026-03-28 02:37:18.240650 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-28 02:37:18.240671 | orchestrator | Saturday 28 March 2026 02:37:15 +0000 (0:00:02.388) 0:02:48.511 ******** 2026-03-28 02:37:18.240804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:18.240834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 02:37:18.240855 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:18.240905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:18.240954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-28 02:37:18.240976 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:18.240999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:37:18.241029 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 02:37:27.954835 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:27.954966 | orchestrator | 2026-03-28 02:37:27.954987 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-28 02:37:27.955005 | orchestrator | Saturday 28 March 2026 02:37:18 +0000 (0:00:02.540) 0:02:51.051 ******** 2026-03-28 02:37:27.955023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955101 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:27.955117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955163 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:27.955189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 02:37:27.955219 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:27.955233 | orchestrator | 2026-03-28 02:37:27.955247 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-28 02:37:27.955261 | orchestrator | Saturday 28 March 2026 02:37:21 +0000 (0:00:02.783) 0:02:53.835 ******** 2026-03-28 02:37:27.955276 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:37:27.955320 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:27.955335 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:27.955348 | orchestrator | 2026-03-28 02:37:27.955362 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-28 02:37:27.955375 | orchestrator | Saturday 28 March 2026 02:37:23 +0000 (0:00:02.054) 0:02:55.889 ******** 2026-03-28 02:37:27.955390 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:27.955403 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:27.955417 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:27.955431 | orchestrator | 2026-03-28 02:37:27.955444 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-28 02:37:27.955459 | orchestrator | Saturday 28 March 2026 02:37:24 +0000 (0:00:01.470) 0:02:57.360 ******** 2026-03-28 02:37:27.955474 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:27.955488 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:27.955503 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:27.955520 | orchestrator | 2026-03-28 02:37:27.955536 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-28 02:37:27.955550 | orchestrator | Saturday 28 March 2026 02:37:24 +0000 (0:00:00.310) 0:02:57.671 ******** 2026-03-28 02:37:27.955565 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:27.955579 | orchestrator | 2026-03-28 02:37:27.955593 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-28 02:37:27.955608 | orchestrator | Saturday 28 March 2026 02:37:26 +0000 (0:00:01.374) 0:02:59.045 ******** 2026-03-28 02:37:27.955635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 02:37:27.955656 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 02:37:27.955671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 02:37:27.955685 | orchestrator | 2026-03-28 02:37:27.955699 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-28 02:37:27.955724 | orchestrator | Saturday 28 March 2026 02:37:27 +0000 (0:00:01.536) 0:03:00.582 ******** 2026-03-28 02:37:27.955775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 02:37:36.409841 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:36.409950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 02:37:36.409963 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:36.409971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 02:37:36.409977 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:36.409983 | orchestrator | 2026-03-28 02:37:36.409990 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-28 02:37:36.409998 | orchestrator | Saturday 28 March 2026 02:37:28 +0000 (0:00:00.387) 0:03:00.969 ******** 2026-03-28 02:37:36.410005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 02:37:36.410113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 02:37:36.410123 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:36.410131 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:36.410138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 02:37:36.410165 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:36.410173 | orchestrator | 2026-03-28 02:37:36.410245 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-28 02:37:36.410256 | orchestrator | Saturday 28 March 2026 02:37:29 +0000 (0:00:00.899) 0:03:01.869 ******** 2026-03-28 02:37:36.410262 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:36.410269 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:36.410276 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:36.410283 | orchestrator | 2026-03-28 02:37:36.410289 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-28 02:37:36.410296 | orchestrator | Saturday 28 March 2026 02:37:29 +0000 (0:00:00.458) 0:03:02.328 ******** 2026-03-28 02:37:36.410302 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:36.410309 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:36.410316 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:36.410322 | orchestrator | 2026-03-28 02:37:36.410329 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-28 02:37:36.410336 | orchestrator | Saturday 28 March 2026 02:37:30 +0000 (0:00:01.321) 0:03:03.650 ******** 2026-03-28 02:37:36.410343 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:36.410352 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:36.410359 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:36.410367 | orchestrator | 2026-03-28 02:37:36.410374 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-28 02:37:36.410382 | orchestrator | Saturday 28 March 2026 02:37:31 +0000 (0:00:00.336) 0:03:03.986 ******** 2026-03-28 02:37:36.410389 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:36.410395 | orchestrator | 2026-03-28 02:37:36.410402 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-03-28 02:37:36.410409 | orchestrator | Saturday 28 March 2026 02:37:32 +0000 (0:00:01.527) 0:03:05.513 ******** 2026-03-28 02:37:36.410432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 02:37:36.410446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.410456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.410476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.410485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 02:37:36.410499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.585851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:36.585952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:36.585971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.586005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 02:37:36.586086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:36.586114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 02:37:36.586154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:36.586173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.586214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-28 02:37:36.586247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-28 02:37:36.586265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:36.586280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.586308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.733662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.733852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-28 02:37:36.733871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.733886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:36.733900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:36.733912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.733949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:36.733971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.733983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-28 02:37:36.733995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-28 02:37:36.734008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:36.734078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:36.734103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.026682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-28 02:37:37.026878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.026912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:37.026933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.026974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-28 02:37:37.027054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.027070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:37.027084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:37.027097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.027110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:37.027128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:37.027155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-28 02:37:38.085267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:38.085405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.085425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-28 02:37:38.085445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:38.085470 | orchestrator |
2026-03-28 02:37:38.085489 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-28 02:37:38.085560 | orchestrator | Saturday 28 March 2026 02:37:37 +0000 (0:00:04.326) 0:03:09.840 ********
2026-03-28 02:37:38.085593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-28 02:37:38.085629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.085645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.085660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.085674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-28 02:37:38.085705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.085720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:38.085766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-28 02:37:38.168465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:38.168612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.168642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.168718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.168812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-28 02:37:38.168865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.168889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.168910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-28 02:37:38.168946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-28 02:37:38.168977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-28 02:37:38.169000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-28 02:37:38.169023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:38.169057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 02:37:38.258635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:38.258689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 02:37:38.258707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 02:37:38.258815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258833 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258854 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:38.258868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 02:37:38.258880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.258912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-28 02:37:38.471423 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 02:37:38.471577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.471606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:38.471628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:38.471657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.471678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:38.471724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 02:37:38.471787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.471825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2026-03-28 02:37:38.471854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 02:37:38.471876 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:38.471900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:38.471922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-28 02:37:38.471957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-28 02:37:49.272818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 02:37:49.272964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 02:37:49.273000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 02:37:49.273014 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:49.273028 | orchestrator | 2026-03-28 02:37:49.273041 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-28 02:37:49.273053 | orchestrator | Saturday 28 March 2026 02:37:38 +0000 (0:00:01.446) 0:03:11.287 ******** 2026-03-28 02:37:49.273065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 02:37:49.273077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2026-03-28 02:37:49.273089 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:49.273100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 02:37:49.273111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 02:37:49.273122 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:49.273133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-28 02:37:49.273144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-28 02:37:49.273163 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:49.273174 | orchestrator | 2026-03-28 02:37:49.273185 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-28 02:37:49.273196 | orchestrator | Saturday 28 March 2026 02:37:40 +0000 (0:00:01.969) 0:03:13.256 ******** 2026-03-28 02:37:49.273208 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:37:49.273218 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:49.273247 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:49.273260 | orchestrator | 2026-03-28 02:37:49.273271 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-28 02:37:49.273284 | orchestrator | Saturday 28 March 2026 02:37:41 +0000 (0:00:01.410) 0:03:14.667 ******** 2026-03-28 02:37:49.273296 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 02:37:49.273308 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:49.273321 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:49.273334 | orchestrator | 2026-03-28 02:37:49.273347 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-28 02:37:49.273359 | orchestrator | Saturday 28 March 2026 02:37:43 +0000 (0:00:02.128) 0:03:16.796 ******** 2026-03-28 02:37:49.273371 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:49.273385 | orchestrator | 2026-03-28 02:37:49.273398 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-28 02:37:49.273417 | orchestrator | Saturday 28 March 2026 02:37:45 +0000 (0:00:01.342) 0:03:18.138 ******** 2026-03-28 02:37:49.273451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 02:37:49.273483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 02:37:49.273503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 02:37:49.273534 | orchestrator | 2026-03-28 02:37:49.273554 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-28 02:37:49.273575 | orchestrator | Saturday 28 March 2026 02:37:48 +0000 (0:00:03.495) 
0:03:21.633 ******** 2026-03-28 02:37:49.273607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 02:37:59.433186 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:59.433356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-03-28 02:37:59.433383 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:59.433428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 02:37:59.433447 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:59.433465 | orchestrator | 2026-03-28 02:37:59.433485 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-28 02:37:59.433505 | orchestrator | Saturday 28 March 2026 02:37:49 +0000 (0:00:00.456) 0:03:22.089 ******** 2026-03-28 02:37:59.433526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433602 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:37:59.433620 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433656 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:37:59.433675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-28 02:37:59.433714 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:37:59.433732 | orchestrator | 2026-03-28 02:37:59.433780 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-28 02:37:59.433799 | orchestrator | Saturday 28 March 2026 02:37:49 +0000 (0:00:00.710) 0:03:22.800 ******** 2026-03-28 02:37:59.433818 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:37:59.433836 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:59.433853 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:59.433871 | orchestrator | 2026-03-28 02:37:59.433890 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-28 02:37:59.433908 | orchestrator | Saturday 28 March 2026 02:37:51 +0000 (0:00:01.814) 0:03:24.615 ******** 2026-03-28 02:37:59.433927 | orchestrator | changed: [testbed-node-0] 2026-03-28 
02:37:59.433946 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:37:59.433993 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:37:59.434011 | orchestrator | 2026-03-28 02:37:59.434112 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-28 02:37:59.434133 | orchestrator | Saturday 28 March 2026 02:37:53 +0000 (0:00:01.866) 0:03:26.481 ******** 2026-03-28 02:37:59.434152 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:37:59.434171 | orchestrator | 2026-03-28 02:37:59.434191 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-28 02:37:59.434209 | orchestrator | Saturday 28 March 2026 02:37:55 +0000 (0:00:01.591) 0:03:28.072 ******** 2026-03-28 02:37:59.434234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 02:37:59.434286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:37:59.434309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:37:59.434341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 02:38:00.651229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651405 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 02:38:00.651421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651445 | orchestrator | 2026-03-28 02:38:00.651459 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-28 02:38:00.651471 | orchestrator | Saturday 28 March 2026 02:37:59 +0000 (0:00:04.172) 0:03:32.245 ******** 2026-03-28 02:38:00.651503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 02:38:00.651525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:38:00.651555 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:00.651568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 02:38:00.651588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:38:11.367878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:38:11.368005 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:11.368060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 02:38:11.368122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 02:38:11.368145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 02:38:11.368163 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:11.368180 | orchestrator | 2026-03-28 02:38:11.368198 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-28 02:38:11.368218 | orchestrator | Saturday 28 March 2026 02:38:00 +0000 (0:00:01.215) 0:03:33.461 ******** 2026-03-28 02:38:11.368239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368344 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:11.368357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368419 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:11.368432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 
02:38:11.368478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-28 02:38:11.368491 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:11.368505 | orchestrator | 2026-03-28 02:38:11.368518 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-28 02:38:11.368530 | orchestrator | Saturday 28 March 2026 02:38:01 +0000 (0:00:00.892) 0:03:34.353 ******** 2026-03-28 02:38:11.368544 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:11.368556 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:11.368569 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:11.368581 | orchestrator | 2026-03-28 02:38:11.368594 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-28 02:38:11.368606 | orchestrator | Saturday 28 March 2026 02:38:02 +0000 (0:00:01.445) 0:03:35.799 ******** 2026-03-28 02:38:11.368619 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:11.368631 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:11.368644 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:11.368656 | orchestrator | 2026-03-28 02:38:11.368669 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-28 02:38:11.368680 | orchestrator | Saturday 28 March 2026 02:38:05 +0000 (0:00:02.161) 0:03:37.960 ******** 2026-03-28 02:38:11.368691 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:38:11.368702 | orchestrator | 2026-03-28 02:38:11.368712 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-28 02:38:11.368723 | orchestrator | Saturday 28 March 2026 02:38:06 +0000 (0:00:01.560) 
0:03:39.520 ******** 2026-03-28 02:38:11.368734 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-28 02:38:11.368775 | orchestrator | 2026-03-28 02:38:11.368787 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-28 02:38:11.368798 | orchestrator | Saturday 28 March 2026 02:38:07 +0000 (0:00:00.834) 0:03:40.354 ******** 2026-03-28 02:38:11.368811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 02:38:11.368840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 02:38:23.548704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 02:38:23.548818 | orchestrator | 2026-03-28 02:38:23.548830 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-28 02:38:23.548838 | orchestrator | Saturday 28 March 2026 02:38:11 +0000 (0:00:03.823) 0:03:44.178 ******** 2026-03-28 02:38:23.548847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.548859 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:23.548888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.548899 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:23.548908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.548917 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:23.548926 | orchestrator | 2026-03-28 02:38:23.548934 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-28 02:38:23.548942 | orchestrator | Saturday 28 March 2026 02:38:12 +0000 (0:00:01.377) 0:03:45.556 ******** 2026-03-28 02:38:23.548952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.548966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.548996 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:23.549007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.549013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.549019 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:23.549024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.549030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 02:38:23.549047 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:23.549052 | orchestrator | 2026-03-28 02:38:23.549058 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 02:38:23.549063 | orchestrator | Saturday 28 March 2026 02:38:14 +0000 (0:00:01.595) 0:03:47.152 ******** 2026-03-28 02:38:23.549068 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:23.549073 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:23.549078 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:23.549083 | orchestrator | 2026-03-28 02:38:23.549088 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 02:38:23.549094 | orchestrator | Saturday 28 March 2026 02:38:17 +0000 (0:00:02.705) 0:03:49.858 ******** 2026-03-28 02:38:23.549099 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:23.549104 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:23.549109 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:23.549114 | orchestrator | 2026-03-28 02:38:23.549123 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-28 02:38:23.549134 | orchestrator | Saturday 28 March 2026 02:38:20 +0000 (0:00:02.988) 0:03:52.846 ******** 2026-03-28 02:38:23.549147 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-28 02:38:23.549157 | orchestrator | 
2026-03-28 02:38:23.549165 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-28 02:38:23.549173 | orchestrator | Saturday 28 March 2026 02:38:21 +0000 (0:00:01.119) 0:03:53.965 ******** 2026-03-28 02:38:23.549189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.549199 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:23.549208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.549236 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:23.549242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.549247 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:23.549253 | orchestrator | 2026-03-28 02:38:23.549258 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-28 02:38:23.549264 | orchestrator | Saturday 28 March 2026 02:38:22 +0000 (0:00:01.081) 0:03:55.047 ******** 2026-03-28 02:38:23.549270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.549276 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:23.549282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:23.549294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:46.580966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 02:38:46.581058 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:46.581070 | orchestrator | 2026-03-28 02:38:46.581078 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-28 02:38:46.581087 | orchestrator | Saturday 28 March 2026 02:38:23 +0000 (0:00:01.310) 0:03:56.357 ******** 2026-03-28 02:38:46.581095 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:46.581102 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:46.581109 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:46.581115 | orchestrator | 2026-03-28 02:38:46.581122 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 02:38:46.581129 | orchestrator | Saturday 28 March 2026 02:38:25 +0000 (0:00:01.550) 0:03:57.907 ******** 2026-03-28 02:38:46.581136 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:38:46.581144 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:38:46.581151 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:38:46.581161 | orchestrator | 2026-03-28 02:38:46.581172 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 02:38:46.581184 | orchestrator | Saturday 28 March 2026 02:38:27 +0000 (0:00:02.628) 0:04:00.536 ******** 2026-03-28 02:38:46.581218 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:38:46.581231 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:38:46.581242 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:38:46.581253 | orchestrator | 2026-03-28 02:38:46.581272 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-serialproxy] ***************** 2026-03-28 02:38:46.581279 | orchestrator | Saturday 28 March 2026 02:38:30 +0000 (0:00:02.730) 0:04:03.266 ******** 2026-03-28 02:38:46.581286 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-28 02:38:46.581294 | orchestrator | 2026-03-28 02:38:46.581301 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-28 02:38:46.581308 | orchestrator | Saturday 28 March 2026 02:38:31 +0000 (0:00:01.212) 0:04:04.478 ******** 2026-03-28 02:38:46.581315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581322 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:46.581329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581336 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:46.581343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581350 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:46.581357 | orchestrator | 2026-03-28 02:38:46.581364 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-28 02:38:46.581372 | orchestrator | Saturday 28 March 2026 02:38:32 +0000 (0:00:01.260) 0:04:05.739 ******** 2026-03-28 02:38:46.581393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581401 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:46.581408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581421 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:46.581428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 02:38:46.581435 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:46.581442 | orchestrator | 2026-03-28 02:38:46.581465 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-28 02:38:46.581472 | orchestrator | Saturday 28 March 2026 02:38:34 +0000 (0:00:01.289) 0:04:07.028 ******** 2026-03-28 02:38:46.581479 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:46.581486 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:46.581492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:46.581499 | orchestrator | 2026-03-28 02:38:46.581506 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 02:38:46.581513 | orchestrator | Saturday 28 March 2026 02:38:35 +0000 (0:00:01.777) 0:04:08.806 ******** 2026-03-28 02:38:46.581519 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:38:46.581526 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:38:46.581533 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:38:46.581539 | orchestrator | 2026-03-28 02:38:46.581546 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 02:38:46.581553 | orchestrator | Saturday 28 March 2026 
02:38:38 +0000 (0:00:02.362) 0:04:11.168 ******** 2026-03-28 02:38:46.581560 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:38:46.581567 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:38:46.581574 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:38:46.581580 | orchestrator | 2026-03-28 02:38:46.581587 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-28 02:38:46.581594 | orchestrator | Saturday 28 March 2026 02:38:41 +0000 (0:00:03.371) 0:04:14.540 ******** 2026-03-28 02:38:46.581602 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:38:46.581613 | orchestrator | 2026-03-28 02:38:46.581623 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-28 02:38:46.581633 | orchestrator | Saturday 28 March 2026 02:38:43 +0000 (0:00:01.619) 0:04:16.160 ******** 2026-03-28 02:38:46.581646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 02:38:46.581670 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 02:38:46.739617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 02:38:46.739716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-03-28 02:38:46.739786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 02:38:46.739803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:46.739814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-28 02:38:46.739826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 02:38:46.739878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:46.739892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:46.739903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 02:38:46.739913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:46.739923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 02:38:46.739964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:46.739982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:46.739993 | orchestrator | 2026-03-28 02:38:46.740013 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-28 02:38:47.351519 | orchestrator | Saturday 28 March 2026 02:38:46 +0000 (0:00:03.398) 0:04:19.558 ******** 2026-03-28 02:38:47.351643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 02:38:47.351666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 02:38:47.351681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-03-28 02:38:47.351694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:47.351707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:47.351836 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:47.351891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 02:38:47.351914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 02:38:47.351939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 02:38:47.351951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:47.351963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:47.351984 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:38:47.351996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 02:38:47.352017 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 02:38:58.997576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 02:38:58.997665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 02:38:58.997674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 02:38:58.997680 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:58.997687 | orchestrator | 2026-03-28 02:38:58.997692 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-28 02:38:58.997698 | orchestrator | Saturday 28 March 2026 02:38:47 +0000 (0:00:00.745) 0:04:20.304 ******** 2026-03-28 02:38:58.997704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997733 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:38:58.997754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997764 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 02:38:58.997768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 02:38:58.997777 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:38:58.997781 | orchestrator | 2026-03-28 02:38:58.997786 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-28 02:38:58.997790 | orchestrator | Saturday 28 March 2026 02:38:48 +0000 (0:00:00.929) 0:04:21.233 ******** 2026-03-28 02:38:58.997795 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:58.997799 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:58.997804 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:58.997808 | orchestrator | 2026-03-28 02:38:58.997812 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-28 02:38:58.997817 | orchestrator | Saturday 28 March 2026 02:38:50 +0000 (0:00:01.747) 0:04:22.980 ******** 2026-03-28 02:38:58.997821 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:38:58.997826 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:38:58.997830 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:38:58.997835 | orchestrator | 2026-03-28 02:38:58.997848 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-28 02:38:58.997853 | orchestrator | Saturday 28 March 2026 02:38:52 +0000 (0:00:02.124) 0:04:25.105 ******** 2026-03-28 02:38:58.997858 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-28 02:38:58.997863 | orchestrator | 2026-03-28 02:38:58.997867 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-28 02:38:58.997871 | orchestrator | Saturday 28 March 2026 02:38:53 +0000 (0:00:01.359) 0:04:26.465 ******** 2026-03-28 02:38:58.997880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:38:58.997887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:38:58.997897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:38:58.997903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:38:58.997915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:39:00.997987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:39:00.998128 | orchestrator | 2026-03-28 02:39:00.998142 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-28 02:39:00.998152 | orchestrator | Saturday 28 March 2026 02:38:58 +0000 (0:00:05.340) 0:04:31.806 ******** 2026-03-28 02:39:00.998162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:39:00.998173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:39:00.998183 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:00.998207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:39:00.998234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:39:00.998249 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:00.998258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:39:00.998290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:39:00.998300 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:00.998309 | orchestrator | 2026-03-28 02:39:00.998317 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-28 02:39:00.998326 | orchestrator | Saturday 28 March 2026 02:39:00 +0000 (0:00:01.060) 0:04:32.867 ******** 2026-03-28 02:39:00.998336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 02:39:00.998346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:00.998357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:00.998373 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:00.998393 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 02:39:07.120055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:07.120169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:07.120186 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:07.120198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-28 02:39:07.120209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:07.120218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-28 02:39:07.120228 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:07.120243 | orchestrator | 2026-03-28 02:39:07.120269 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-28 02:39:07.120285 | orchestrator | Saturday 28 March 2026 02:39:00 +0000 (0:00:00.947) 0:04:33.814 ******** 2026-03-28 
02:39:07.120299 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:07.120312 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:07.120326 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:07.120337 | orchestrator | 2026-03-28 02:39:07.120349 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-28 02:39:07.120361 | orchestrator | Saturday 28 March 2026 02:39:01 +0000 (0:00:00.455) 0:04:34.270 ******** 2026-03-28 02:39:07.120373 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:07.120385 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:07.120398 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:07.120410 | orchestrator | 2026-03-28 02:39:07.120422 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-28 02:39:07.120436 | orchestrator | Saturday 28 March 2026 02:39:02 +0000 (0:00:01.484) 0:04:35.755 ******** 2026-03-28 02:39:07.120449 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:39:07.120463 | orchestrator | 2026-03-28 02:39:07.120477 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-28 02:39:07.120490 | orchestrator | Saturday 28 March 2026 02:39:04 +0000 (0:00:01.755) 0:04:37.510 ******** 2026-03-28 02:39:07.120507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 02:39:07.120556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 02:39:07.120592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:07.120632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:07.120645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 02:39:07.120657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 02:39:07.120668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 02:39:07.120678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:07.120697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:07.120707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 02:39:07.120731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 02:39:08.729407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 02:39:08.729501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:08.729515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 02:39:08.729526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 02:39:08.729566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 02:39:08.729602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:08.729628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:08.729639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:08.729648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:08.729657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 02:39:08.729674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:08.729688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:08.729705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.423402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:09.423510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 02:39:09.423556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:09.423571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.423600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.423614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:09.423627 | orchestrator |
2026-03-28 02:39:09.423641 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-28 02:39:09.423673 | orchestrator | Saturday 28 March 2026 02:39:08 +0000 (0:00:04.182) 0:04:41.693 ********
2026-03-28 02:39:09.423687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 02:39:09.423700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 02:39:09.423722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.423733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.423798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 02:39:09.423819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 02:39:09.423843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:09.579453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.579599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.579636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 02:39:09.579663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:09.579689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 02:39:09.579702 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:39:09.579716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.579728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.579814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 02:39:09.579844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 02:39:09.579858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:09.579876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:09.579888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 02:39:09.579907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:11.168222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 02:39:11.168313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:11.168327 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:39:11.168339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:11.168349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:11.168378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 02:39:11.168391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 02:39:11.168418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-28 02:39:11.168450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:11.168461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:39:11.168470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 02:39:11.168479 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:39:11.168489 | orchestrator |
2026-03-28 02:39:11.168499 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-28 02:39:11.168509 | orchestrator | Saturday 28 March 2026 02:39:09 +0000 (0:00:00.856) 0:04:42.550 ********
2026-03-28 02:39:11.168523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-28 02:39:11.168535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-28 02:39:11.168546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:11.168557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:11.168568 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:39:11.168577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-28 02:39:11.168592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-28 02:39:11.168602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:11.168616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:17.386821 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:39:17.386935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-28 02:39:17.386953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-28 02:39:17.386968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:17.386981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-28 02:39:17.386993 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:39:17.387003 | orchestrator |
2026-03-28 02:39:17.387015 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-28 02:39:17.387026 | orchestrator | Saturday 28 March 2026 02:39:11 +0000 (0:00:01.429) 0:04:43.980 ********
2026-03-28 02:39:17.387036 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:39:17.387046 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:39:17.387056 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:39:17.387066 | orchestrator |
2026-03-28 02:39:17.387076 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-28 02:39:17.387086 | orchestrator | Saturday 28 March 2026 02:39:11 +0000 (0:00:00.452) 0:04:44.432 ********
2026-03-28 02:39:17.387096 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:39:17.387113 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:39:17.387130 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:39:17.387146 | orchestrator |
2026-03-28 02:39:17.387164 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-28 02:39:17.387182 | orchestrator | Saturday 28 March 2026 02:39:12 +0000 (0:00:01.378) 0:04:45.811 ********
2026-03-28 02:39:17.387199 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:39:17.387215 | orchestrator |
2026-03-28 02:39:17.387226 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-28 02:39:17.387237 | orchestrator | Saturday 28 March 2026 02:39:14 +0000 (0:00:01.805) 0:04:47.616 ********
2026-03-28 02:39:17.387251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-28 02:39:17.387291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-28 02:39:17.387363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-28 02:39:17.387378 | orchestrator |
2026-03-28 02:39:17.387389 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-28 02:39:17.387402 | orchestrator | Saturday 28 March 2026 02:39:16 +0000 (0:00:02.146) 0:04:49.762 ********
2026-03-28 02:39:17.387418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 02:39:17.387440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 02:39:17.387453 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:17.387468 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:17.387498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 02:39:29.238631 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:29.238732 | orchestrator | 2026-03-28 02:39:29.238806 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-28 02:39:29.238821 | orchestrator | Saturday 28 March 2026 02:39:17 +0000 (0:00:00.441) 0:04:50.204 ******** 2026-03-28 02:39:29.238833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 02:39:29.238843 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:29.238851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 02:39:29.238858 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:29.238865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 02:39:29.238872 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:29.238879 | orchestrator | 2026-03-28 02:39:29.238886 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-28 02:39:29.238893 | orchestrator | Saturday 28 March 2026 02:39:18 +0000 (0:00:01.033) 0:04:51.238 ******** 2026-03-28 02:39:29.238900 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 02:39:29.238906 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:29.238913 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:29.238920 | orchestrator | 2026-03-28 02:39:29.238927 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-28 02:39:29.238933 | orchestrator | Saturday 28 March 2026 02:39:18 +0000 (0:00:00.524) 0:04:51.762 ******** 2026-03-28 02:39:29.238940 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:29.238965 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:29.238973 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:29.238979 | orchestrator | 2026-03-28 02:39:29.238986 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-28 02:39:29.238993 | orchestrator | Saturday 28 March 2026 02:39:20 +0000 (0:00:01.383) 0:04:53.146 ******** 2026-03-28 02:39:29.239000 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:39:29.239007 | orchestrator | 2026-03-28 02:39:29.239014 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-28 02:39:29.239020 | orchestrator | Saturday 28 March 2026 02:39:21 +0000 (0:00:01.556) 0:04:54.702 ******** 2026-03-28 02:39:29.239042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 02:39:29.239120 | orchestrator | 2026-03-28 02:39:29.239137 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-28 02:39:29.239145 | orchestrator | Saturday 28 March 2026 02:39:28 +0000 (0:00:06.639) 0:05:01.342 ******** 2026-03-28 02:39:29.239158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571299 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:31.571340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571368 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:31.571380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-28 02:39:31.571435 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:31.571447 | orchestrator | 2026-03-28 02:39:31.571459 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-28 02:39:31.571472 | orchestrator | Saturday 28 March 2026 02:39:29 +0000 (0:00:00.711) 0:05:02.053 ******** 2026-03-28 02:39:31.571484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571539 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:39:31.571550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 
02:39:31.571595 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:39:31.571608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-28 02:39:31.571660 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:39:31.571673 | orchestrator | 2026-03-28 02:39:31.571693 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-28 02:39:31.571706 | orchestrator | Saturday 28 March 2026 02:39:30 +0000 (0:00:01.013) 0:05:03.067 ******** 2026-03-28 02:39:31.571719 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:39:31.571732 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:39:31.571777 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:39:31.571796 | orchestrator | 2026-03-28 02:39:31.571817 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-28 02:39:31.571847 | orchestrator | Saturday 28 March 2026 02:39:31 +0000 (0:00:01.311) 0:05:04.379 ******** 2026-03-28 02:40:22.181419 | orchestrator | 
changed: [testbed-node-0] 2026-03-28 02:40:22.181534 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:40:22.181551 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:40:22.181563 | orchestrator | 2026-03-28 02:40:22.181576 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-28 02:40:22.181589 | orchestrator | Saturday 28 March 2026 02:39:33 +0000 (0:00:02.266) 0:05:06.645 ******** 2026-03-28 02:40:22.181601 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:22.181612 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.181623 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.181634 | orchestrator | 2026-03-28 02:40:22.181646 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-28 02:40:22.181657 | orchestrator | Saturday 28 March 2026 02:39:34 +0000 (0:00:00.685) 0:05:07.331 ******** 2026-03-28 02:40:22.181668 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:22.181679 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.181690 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.181701 | orchestrator | 2026-03-28 02:40:22.181713 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-28 02:40:22.181724 | orchestrator | Saturday 28 March 2026 02:39:34 +0000 (0:00:00.337) 0:05:07.668 ******** 2026-03-28 02:40:22.181734 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:22.181859 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.181874 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.181893 | orchestrator | 2026-03-28 02:40:22.181912 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-28 02:40:22.181931 | orchestrator | Saturday 28 March 2026 02:39:35 +0000 (0:00:00.369) 0:05:08.037 ******** 2026-03-28 02:40:22.181949 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 02:40:22.181968 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.181985 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.182003 | orchestrator | 2026-03-28 02:40:22.182103 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-28 02:40:22.182125 | orchestrator | Saturday 28 March 2026 02:39:35 +0000 (0:00:00.350) 0:05:08.388 ******** 2026-03-28 02:40:22.182143 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:22.182161 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.182180 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.182198 | orchestrator | 2026-03-28 02:40:22.182216 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-28 02:40:22.182258 | orchestrator | Saturday 28 March 2026 02:39:36 +0000 (0:00:00.668) 0:05:09.056 ******** 2026-03-28 02:40:22.182281 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:22.182300 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:22.182320 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:22.182341 | orchestrator | 2026-03-28 02:40:22.182359 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-28 02:40:22.182379 | orchestrator | Saturday 28 March 2026 02:39:36 +0000 (0:00:00.557) 0:05:09.614 ******** 2026-03-28 02:40:22.182399 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:40:22.182414 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:40:22.182425 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:40:22.182436 | orchestrator | 2026-03-28 02:40:22.182447 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-28 02:40:22.182483 | orchestrator | Saturday 28 March 2026 02:39:37 +0000 (0:00:00.667) 0:05:10.281 ******** 2026-03-28 02:40:22.182495 | orchestrator | ok: [testbed-node-0] 
2026-03-28 02:40:22.182506 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:40:22.182516 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:40:22.182527 | orchestrator | 2026-03-28 02:40:22.182538 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-28 02:40:22.182549 | orchestrator | Saturday 28 March 2026 02:39:38 +0000 (0:00:00.686) 0:05:10.967 ******** 2026-03-28 02:40:22.182559 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:40:22.182570 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:40:22.182581 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:40:22.182591 | orchestrator | 2026-03-28 02:40:22.182602 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-28 02:40:22.182613 | orchestrator | Saturday 28 March 2026 02:39:39 +0000 (0:00:00.876) 0:05:11.844 ******** 2026-03-28 02:40:22.182624 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:40:22.182635 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:40:22.182645 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:40:22.182656 | orchestrator | 2026-03-28 02:40:22.182667 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-28 02:40:22.182678 | orchestrator | Saturday 28 March 2026 02:39:39 +0000 (0:00:00.890) 0:05:12.735 ******** 2026-03-28 02:40:22.182689 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:40:22.182700 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:40:22.182711 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:40:22.182721 | orchestrator | 2026-03-28 02:40:22.182732 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-28 02:40:22.182775 | orchestrator | Saturday 28 March 2026 02:39:40 +0000 (0:00:00.970) 0:05:13.705 ******** 2026-03-28 02:40:22.182791 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:40:22.182817 | orchestrator | changed: [testbed-node-0] 
2026-03-28 02:40:22.182839 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:40:22.182850 | orchestrator |
2026-03-28 02:40:22.182861 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-28 02:40:22.182872 | orchestrator | Saturday 28 March 2026 02:39:50 +0000 (0:00:09.717) 0:05:23.422 ********
2026-03-28 02:40:22.182883 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:40:22.182893 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:40:22.182904 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:40:22.182915 | orchestrator |
2026-03-28 02:40:22.182926 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-28 02:40:22.182937 | orchestrator | Saturday 28 March 2026 02:39:51 +0000 (0:00:01.212) 0:05:24.634 ********
2026-03-28 02:40:22.182948 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:40:22.182959 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:40:22.182970 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:40:22.182981 | orchestrator |
2026-03-28 02:40:22.182992 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-28 02:40:22.183003 | orchestrator | Saturday 28 March 2026 02:40:07 +0000 (0:00:15.428) 0:05:40.063 ********
2026-03-28 02:40:22.183014 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:40:22.183102 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:40:22.183117 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:40:22.183136 | orchestrator |
2026-03-28 02:40:22.183153 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-28 02:40:22.183172 | orchestrator | Saturday 28 March 2026 02:40:07 +0000 (0:00:00.748) 0:05:40.812 ********
2026-03-28 02:40:22.183190 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:40:22.183206 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:40:22.183224 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:40:22.183242 | orchestrator |
2026-03-28 02:40:22.183261 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-28 02:40:22.183280 | orchestrator | Saturday 28 March 2026 02:40:13 +0000 (0:00:05.497) 0:05:46.309 ********
2026-03-28 02:40:22.183325 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183338 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183349 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183368 | orchestrator |
2026-03-28 02:40:22.183386 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-28 02:40:22.183404 | orchestrator | Saturday 28 March 2026 02:40:14 +0000 (0:00:00.737) 0:05:47.046 ********
2026-03-28 02:40:22.183423 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183442 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183462 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183480 | orchestrator |
2026-03-28 02:40:22.183498 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-28 02:40:22.183516 | orchestrator | Saturday 28 March 2026 02:40:14 +0000 (0:00:00.382) 0:05:47.428 ********
2026-03-28 02:40:22.183527 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183538 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183549 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183559 | orchestrator |
2026-03-28 02:40:22.183570 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-28 02:40:22.183580 | orchestrator | Saturday 28 March 2026 02:40:14 +0000 (0:00:00.358) 0:05:47.787 ********
2026-03-28 02:40:22.183591 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183602 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183613 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183623 | orchestrator |
2026-03-28 02:40:22.183634 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-28 02:40:22.183645 | orchestrator | Saturday 28 March 2026 02:40:15 +0000 (0:00:00.388) 0:05:48.176 ********
2026-03-28 02:40:22.183655 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183675 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183686 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183697 | orchestrator |
2026-03-28 02:40:22.183707 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-28 02:40:22.183718 | orchestrator | Saturday 28 March 2026 02:40:16 +0000 (0:00:00.734) 0:05:48.910 ********
2026-03-28 02:40:22.183729 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:40:22.183765 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:40:22.183778 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:40:22.183789 | orchestrator |
2026-03-28 02:40:22.183800 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-28 02:40:22.183811 | orchestrator | Saturday 28 March 2026 02:40:16 +0000 (0:00:00.364) 0:05:49.275 ********
2026-03-28 02:40:22.183821 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:40:22.183832 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:40:22.183842 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:40:22.183853 | orchestrator |
2026-03-28 02:40:22.183864 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-28 02:40:22.183874 | orchestrator | Saturday 28 March 2026 02:40:21 +0000 (0:00:04.888) 0:05:54.163 ********
2026-03-28 02:40:22.183885 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:40:22.183895 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:40:22.183906 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:40:22.183917 | orchestrator |
2026-03-28 02:40:22.183927 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:40:22.183939 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-28 02:40:22.183952 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-28 02:40:22.183962 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-28 02:40:22.183973 | orchestrator |
2026-03-28 02:40:22.183994 | orchestrator |
2026-03-28 02:40:22.184005 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:40:22.184015 | orchestrator | Saturday 28 March 2026 02:40:22 +0000 (0:00:00.811) 0:05:54.975 ********
2026-03-28 02:40:22.184026 | orchestrator | ===============================================================================
2026-03-28 02:40:22.184037 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.43s
2026-03-28 02:40:22.184047 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.72s
2026-03-28 02:40:22.184058 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.64s
2026-03-28 02:40:22.184069 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.50s
2026-03-28 02:40:22.184079 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.34s
2026-03-28 02:40:22.184090 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.89s
2026-03-28 02:40:22.184100 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.33s
2026-03-28 02:40:22.184111 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.18s
2026-03-28 02:40:22.184121 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.17s
2026-03-28 02:40:22.184143 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.09s
2026-03-28 02:40:23.058897 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.82s
2026-03-28 02:40:23.058972 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.66s
2026-03-28 02:40:23.058978 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.60s
2026-03-28 02:40:23.058982 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.50s
2026-03-28 02:40:23.058986 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.45s
2026-03-28 02:40:23.058991 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.40s
2026-03-28 02:40:23.058995 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.40s
2026-03-28 02:40:23.058999 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.37s
2026-03-28 02:40:23.059003 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.35s
2026-03-28 02:40:23.059007 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.33s
2026-03-28 02:40:25.569387 | orchestrator | 2026-03-28 02:40:25 | INFO  | Task cc020754-6dfb-4172-874b-b811a74b8bf5 (opensearch) was prepared for execution.
2026-03-28 02:40:25.569473 | orchestrator | 2026-03-28 02:40:25 | INFO  | It takes a moment until task cc020754-6dfb-4172-874b-b811a74b8bf5 (opensearch) has been started and output is visible here.
2026-03-28 02:40:36.498939 | orchestrator |
2026-03-28 02:40:36.499020 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 02:40:36.499028 | orchestrator |
2026-03-28 02:40:36.499033 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 02:40:36.499038 | orchestrator | Saturday 28 March 2026 02:40:29 +0000 (0:00:00.267) 0:00:00.267 ********
2026-03-28 02:40:36.499043 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:40:36.499048 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:40:36.499053 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:40:36.499057 | orchestrator |
2026-03-28 02:40:36.499062 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 02:40:36.499066 | orchestrator | Saturday 28 March 2026 02:40:30 +0000 (0:00:00.340) 0:00:00.607 ********
2026-03-28 02:40:36.499107 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-28 02:40:36.499114 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-28 02:40:36.499118 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-28 02:40:36.499123 | orchestrator |
2026-03-28 02:40:36.499127 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-28 02:40:36.499146 | orchestrator |
2026-03-28 02:40:36.499150 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-28 02:40:36.499155 | orchestrator | Saturday 28 March 2026 02:40:30 +0000 (0:00:00.444) 0:00:01.051 ********
2026-03-28 02:40:36.499160 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:40:36.499164 | orchestrator |
2026-03-28 02:40:36.499169 | orchestrator | TASK [opensearch : Setting sysctl values]
************************************** 2026-03-28 02:40:36.499173 | orchestrator | Saturday 28 March 2026 02:40:31 +0000 (0:00:00.539) 0:00:01.590 ******** 2026-03-28 02:40:36.499178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:40:36.499182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:40:36.499188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 02:40:36.499192 | orchestrator | 2026-03-28 02:40:36.499197 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-28 02:40:36.499201 | orchestrator | Saturday 28 March 2026 02:40:31 +0000 (0:00:00.688) 0:00:02.279 ******** 2026-03-28 02:40:36.499208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:36.499216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:36.499232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:36.499243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:36.499253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:36.499259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:36.499263 | orchestrator | 2026-03-28 02:40:36.499268 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 02:40:36.499272 | orchestrator | Saturday 28 March 2026 02:40:33 +0000 (0:00:01.772) 0:00:04.051 ******** 2026-03-28 02:40:36.499277 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:40:36.499281 | orchestrator | 2026-03-28 02:40:36.499286 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-28 02:40:36.499290 | orchestrator | Saturday 28 March 2026 02:40:34 +0000 (0:00:00.537) 0:00:04.589 ******** 2026-03-28 02:40:36.499302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:37.329910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:37.329997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:37.330010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:37.330061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:37.330118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:37.330127 | orchestrator | 2026-03-28 02:40:37.330135 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-28 02:40:37.330143 | orchestrator | Saturday 28 March 2026 02:40:36 +0000 (0:00:02.379) 0:00:06.968 ******** 2026-03-28 02:40:37.330151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:37.330158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:37.330165 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:37.330173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:37.330195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:38.375807 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:38.375928 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:38.375950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:38.375965 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:38.375977 | orchestrator | 2026-03-28 02:40:38.375989 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-28 02:40:38.376002 | orchestrator | Saturday 28 March 2026 02:40:37 +0000 (0:00:00.836) 0:00:07.805 ******** 2026-03-28 02:40:38.376040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:38.376069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:38.376102 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:40:38.376115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:38.376127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:38.376139 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:40:38.376169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-28 02:40:38.376187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-28 02:40:38.376199 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:40:38.376210 | orchestrator | 2026-03-28 02:40:38.376252 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-28 02:40:38.376274 | orchestrator | Saturday 28 March 2026 02:40:38 +0000 (0:00:01.039) 0:00:08.844 ******** 2026-03-28 02:40:46.388866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:46.389001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:46.389030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:46.389107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:46.389163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:46.389178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:40:46.389201 | orchestrator | 2026-03-28 02:40:46.389215 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-28 02:40:46.389227 | orchestrator | Saturday 28 March 2026 02:40:40 +0000 (0:00:02.237) 0:00:11.082 ******** 2026-03-28 02:40:46.389239 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:40:46.389252 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:40:46.389263 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:40:46.389274 | orchestrator | 2026-03-28 02:40:46.389285 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-28 02:40:46.389297 | orchestrator | Saturday 28 March 2026 02:40:42 +0000 (0:00:02.313) 0:00:13.396 ******** 2026-03-28 02:40:46.389308 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:40:46.389319 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:40:46.389332 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:40:46.389346 | orchestrator | 2026-03-28 02:40:46.389358 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-28 
02:40:46.389371 | orchestrator | Saturday 28 March 2026 02:40:44 +0000 (0:00:01.806) 0:00:15.202 ******** 2026-03-28 02:40:46.389385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:46.389405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:40:46.389428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-28 02:43:41.579849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:43:41.579984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:43:41.580014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-28 02:43:41.580025 | orchestrator | 2026-03-28 02:43:41.580036 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 02:43:41.580045 | orchestrator | Saturday 28 March 2026 02:40:46 +0000 (0:00:01.658) 0:00:16.860 ******** 2026-03-28 02:43:41.580052 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:43:41.580062 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:43:41.580070 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:43:41.580077 | orchestrator | 2026-03-28 02:43:41.580086 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 02:43:41.580095 | orchestrator | Saturday 28 March 2026 02:40:46 +0000 (0:00:00.290) 0:00:17.151 ******** 2026-03-28 02:43:41.580103 | orchestrator | 2026-03-28 02:43:41.580111 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 02:43:41.580120 | orchestrator | Saturday 28 March 2026 02:40:46 +0000 (0:00:00.060) 0:00:17.212 ******** 2026-03-28 02:43:41.580129 | orchestrator | 2026-03-28 02:43:41.580137 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-28 02:43:41.580153 | orchestrator | Saturday 28 March 2026 02:40:46 +0000 (0:00:00.075) 0:00:17.287 ******** 2026-03-28 02:43:41.580162 | orchestrator | 2026-03-28 02:43:41.580171 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-28 02:43:41.580196 | orchestrator | Saturday 28 March 2026 02:40:46 +0000 (0:00:00.072) 0:00:17.360 ******** 2026-03-28 02:43:41.580205 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:43:41.580213 | orchestrator | 2026-03-28 02:43:41.580221 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-28 02:43:41.580228 | 
orchestrator | Saturday 28 March 2026 02:40:47 +0000 (0:00:00.250) 0:00:17.611 ******** 2026-03-28 02:43:41.580236 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:43:41.580243 | orchestrator | 2026-03-28 02:43:41.580252 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-28 02:43:41.580259 | orchestrator | Saturday 28 March 2026 02:40:47 +0000 (0:00:00.630) 0:00:18.241 ******** 2026-03-28 02:43:41.580267 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:43:41.580274 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:43:41.580281 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:43:41.580289 | orchestrator | 2026-03-28 02:43:41.580297 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-28 02:43:41.580304 | orchestrator | Saturday 28 March 2026 02:42:02 +0000 (0:01:14.685) 0:01:32.926 ******** 2026-03-28 02:43:41.580311 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:43:41.580319 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:43:41.580327 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:43:41.580334 | orchestrator | 2026-03-28 02:43:41.580342 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 02:43:41.580350 | orchestrator | Saturday 28 March 2026 02:43:30 +0000 (0:01:27.945) 0:03:00.872 ******** 2026-03-28 02:43:41.580359 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:43:41.580367 | orchestrator | 2026-03-28 02:43:41.580375 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-28 02:43:41.580383 | orchestrator | Saturday 28 March 2026 02:43:30 +0000 (0:00:00.535) 0:03:01.408 ******** 2026-03-28 02:43:41.580391 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:43:41.580398 | orchestrator | 2026-03-28 
02:43:41.580406 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-28 02:43:41.580413 | orchestrator | Saturday 28 March 2026 02:43:33 +0000 (0:00:02.890) 0:03:04.298 ******** 2026-03-28 02:43:41.580421 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:43:41.580429 | orchestrator | 2026-03-28 02:43:41.580438 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-28 02:43:41.580449 | orchestrator | Saturday 28 March 2026 02:43:35 +0000 (0:00:02.168) 0:03:06.467 ******** 2026-03-28 02:43:41.580456 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:43:41.580463 | orchestrator | 2026-03-28 02:43:41.580470 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-28 02:43:41.580477 | orchestrator | Saturday 28 March 2026 02:43:38 +0000 (0:00:02.958) 0:03:09.425 ******** 2026-03-28 02:43:41.580484 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:43:41.580491 | orchestrator | 2026-03-28 02:43:41.580499 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:43:41.580507 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 02:43:41.580516 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:43:41.580530 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:43:41.580538 | orchestrator | 2026-03-28 02:43:41.580546 | orchestrator | 2026-03-28 02:43:41.580560 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:43:41.580568 | orchestrator | Saturday 28 March 2026 02:43:41 +0000 (0:00:02.608) 0:03:12.033 ******** 2026-03-28 02:43:41.580575 | orchestrator | 
=============================================================================== 2026-03-28 02:43:41.580583 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 87.95s 2026-03-28 02:43:41.580592 | orchestrator | opensearch : Restart opensearch container ------------------------------ 74.69s 2026-03-28 02:43:41.580601 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.96s 2026-03-28 02:43:41.580609 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.89s 2026-03-28 02:43:41.580616 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s 2026-03-28 02:43:41.580623 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.38s 2026-03-28 02:43:41.580631 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.31s 2026-03-28 02:43:41.580638 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.24s 2026-03-28 02:43:41.580645 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2026-03-28 02:43:41.580653 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.81s 2026-03-28 02:43:41.580660 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.77s 2026-03-28 02:43:41.580668 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.66s 2026-03-28 02:43:41.580676 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.04s 2026-03-28 02:43:41.580685 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.84s 2026-03-28 02:43:41.580694 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2026-03-28 02:43:41.580702 | orchestrator | 
opensearch : Perform a flush -------------------------------------------- 0.63s 2026-03-28 02:43:41.580719 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-28 02:43:41.921943 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-28 02:43:41.922093 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-28 02:43:41.922112 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-03-28 02:43:44.298426 | orchestrator | 2026-03-28 02:43:44 | INFO  | Task 73314f35-4744-49e7-9f3a-f433702dc3fe (memcached) was prepared for execution. 2026-03-28 02:43:44.298552 | orchestrator | 2026-03-28 02:43:44 | INFO  | It takes a moment until task 73314f35-4744-49e7-9f3a-f433702dc3fe (memcached) has been started and output is visible here. 2026-03-28 02:44:01.146301 | orchestrator | 2026-03-28 02:44:01.146400 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 02:44:01.146414 | orchestrator | 2026-03-28 02:44:01.146424 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 02:44:01.146433 | orchestrator | Saturday 28 March 2026 02:43:48 +0000 (0:00:00.269) 0:00:00.269 ******** 2026-03-28 02:44:01.146441 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:44:01.146449 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:44:01.146457 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:44:01.146463 | orchestrator | 2026-03-28 02:44:01.146472 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 02:44:01.146478 | orchestrator | Saturday 28 March 2026 02:43:48 +0000 (0:00:00.299) 0:00:00.568 ******** 2026-03-28 02:44:01.146483 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-28 02:44:01.146488 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-28 02:44:01.146493 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-28 02:44:01.146497 | orchestrator | 2026-03-28 02:44:01.146502 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-28 02:44:01.146526 | orchestrator | 2026-03-28 02:44:01.146531 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-28 02:44:01.146536 | orchestrator | Saturday 28 March 2026 02:43:49 +0000 (0:00:00.426) 0:00:00.995 ******** 2026-03-28 02:44:01.146541 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:44:01.146546 | orchestrator | 2026-03-28 02:44:01.146551 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-28 02:44:01.146555 | orchestrator | Saturday 28 March 2026 02:43:49 +0000 (0:00:00.490) 0:00:01.486 ******** 2026-03-28 02:44:01.146560 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 02:44:01.146564 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 02:44:01.146569 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 02:44:01.146573 | orchestrator | 2026-03-28 02:44:01.146577 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-28 02:44:01.146582 | orchestrator | Saturday 28 March 2026 02:43:50 +0000 (0:00:00.686) 0:00:02.172 ******** 2026-03-28 02:44:01.146586 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 02:44:01.146590 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 02:44:01.146595 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 02:44:01.146599 | orchestrator | 2026-03-28 02:44:01.146603 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-03-28 02:44:01.146607 | orchestrator | Saturday 28 March 2026 02:43:52 +0000 (0:00:01.699) 0:00:03.871 ******** 2026-03-28 02:44:01.146622 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:44:01.146626 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:44:01.146631 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:44:01.146635 | orchestrator | 2026-03-28 02:44:01.146639 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-28 02:44:01.146644 | orchestrator | Saturday 28 March 2026 02:43:53 +0000 (0:00:01.433) 0:00:05.305 ******** 2026-03-28 02:44:01.146648 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:44:01.146652 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:44:01.146656 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:44:01.146661 | orchestrator | 2026-03-28 02:44:01.146665 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:44:01.146669 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 02:44:01.146675 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 02:44:01.146680 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 02:44:01.146684 | orchestrator | 2026-03-28 02:44:01.146688 | orchestrator | 2026-03-28 02:44:01.146693 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:44:01.146697 | orchestrator | Saturday 28 March 2026 02:44:00 +0000 (0:00:07.070) 0:00:12.375 ******** 2026-03-28 02:44:01.146701 | orchestrator | =============================================================================== 2026-03-28 02:44:01.146705 | orchestrator | memcached : Restart memcached container 
--------------------------------- 7.07s 2026-03-28 02:44:01.146710 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.70s 2026-03-28 02:44:01.146714 | orchestrator | memcached : Check memcached container ----------------------------------- 1.43s 2026-03-28 02:44:01.146719 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.69s 2026-03-28 02:44:01.146723 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.49s 2026-03-28 02:44:01.146727 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-03-28 02:44:01.146732 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-28 02:44:03.509318 | orchestrator | 2026-03-28 02:44:03 | INFO  | Task 46aa502b-00ee-4575-bf5c-f2bf6d2590dd (redis) was prepared for execution. 2026-03-28 02:44:03.509477 | orchestrator | 2026-03-28 02:44:03 | INFO  | It takes a moment until task 46aa502b-00ee-4575-bf5c-f2bf6d2590dd (redis) has been started and output is visible here. 
2026-03-28 02:44:12.731263 | orchestrator |
2026-03-28 02:44:12.731409 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 02:44:12.731438 | orchestrator |
2026-03-28 02:44:12.731459 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 02:44:12.731479 | orchestrator | Saturday 28 March 2026 02:44:07 +0000 (0:00:00.271) 0:00:00.271 ********
2026-03-28 02:44:12.731498 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:44:12.731517 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:44:12.731537 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:44:12.731549 | orchestrator |
2026-03-28 02:44:12.731561 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 02:44:12.731572 | orchestrator | Saturday 28 March 2026 02:44:08 +0000 (0:00:00.290) 0:00:00.562 ********
2026-03-28 02:44:12.731583 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-28 02:44:12.731595 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-28 02:44:12.731606 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-28 02:44:12.731617 | orchestrator |
2026-03-28 02:44:12.731628 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-28 02:44:12.731639 | orchestrator |
2026-03-28 02:44:12.731650 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-28 02:44:12.731661 | orchestrator | Saturday 28 March 2026 02:44:08 +0000 (0:00:00.435) 0:00:00.998 ********
2026-03-28 02:44:12.731672 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:44:12.731684 | orchestrator |
2026-03-28 02:44:12.731695 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-28 02:44:12.731707 | orchestrator | Saturday 28 March 2026 02:44:09 +0000 (0:00:00.489) 0:00:01.488 ********
2026-03-28 02:44:12.731721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:12.731930 | orchestrator |
2026-03-28 02:44:12.731946 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-28 02:44:12.731959 | orchestrator | Saturday 28 March 2026 02:44:10 +0000 (0:00:01.077) 0:00:02.566 ********
2026-03-28 02:44:12.731972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.732034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.732048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:12.732072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:12.732094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870177 | orchestrator |
2026-03-28 02:44:16.870196 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-28 02:44:16.870209 | orchestrator | Saturday 28 March 2026 02:44:12 +0000 (0:00:02.593) 0:00:05.159 ********
2026-03-28 02:44:16.870223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870358 | orchestrator |
2026-03-28 02:44:16.870370 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-28 02:44:16.870381 | orchestrator | Saturday 28 March 2026 02:44:15 +0000 (0:00:02.490) 0:00:07.651 ********
2026-03-28 02:44:16.870393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:16.870474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 02:44:32.903642 | orchestrator |
2026-03-28 02:44:32.903762 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-28 02:44:32.903779 | orchestrator | Saturday 28 March 2026 02:44:16 +0000 (0:00:01.448) 0:00:09.100 ********
2026-03-28 02:44:32.903866 | orchestrator |
2026-03-28 02:44:32.903886 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-28 02:44:32.903902 | orchestrator | Saturday 28 March 2026 02:44:16 +0000 (0:00:00.068) 0:00:09.168 ********
2026-03-28 02:44:32.903913 | orchestrator |
2026-03-28 02:44:32.903924 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-28 02:44:32.903936 | orchestrator | Saturday 28 March 2026 02:44:16 +0000 (0:00:00.065) 0:00:09.234 ********
2026-03-28 02:44:32.903947 | orchestrator |
2026-03-28 02:44:32.903958 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-28 02:44:32.903969 | orchestrator | Saturday 28 March 2026 02:44:16 +0000 (0:00:00.064) 0:00:09.298 ********
2026-03-28 02:44:32.903981 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:32.903993 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:44:32.904005 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:44:32.904016 | orchestrator |
2026-03-28 02:44:32.904027 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-28 02:44:32.904038 | orchestrator | Saturday 28 March 2026 02:44:24 +0000 (0:00:07.593) 0:00:16.892 ********
2026-03-28 02:44:32.904078 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:32.904090 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:44:32.904101 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:44:32.904112 | orchestrator |
2026-03-28 02:44:32.904123 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:44:32.904135 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:44:32.904148 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:44:32.904174 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:44:32.904188 | orchestrator |
2026-03-28 02:44:32.904201 | orchestrator |
2026-03-28 02:44:32.904214 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:44:32.904227 | orchestrator | Saturday 28 March 2026 02:44:32 +0000 (0:00:08.092) 0:00:24.985 ********
2026-03-28 02:44:32.904240 | orchestrator | ===============================================================================
2026-03-28 02:44:32.904252 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.09s
2026-03-28 02:44:32.904265 | orchestrator | redis : Restart redis container ----------------------------------------- 7.59s
2026-03-28 02:44:32.904278 | orchestrator | redis : Copying over default config.json files -------------------------- 2.59s
2026-03-28 02:44:32.904291 | orchestrator | redis : Copying over redis config files --------------------------------- 2.49s
2026-03-28 02:44:32.904304 | orchestrator | redis : Check redis containers ------------------------------------------ 1.45s
2026-03-28 02:44:32.904316 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.08s
2026-03-28 02:44:32.904329 | orchestrator | redis : include_tasks --------------------------------------------------- 0.49s
2026-03-28 02:44:32.904342 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-03-28 02:44:32.904354 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-28 02:44:32.904367 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-03-28 02:44:35.281934 | orchestrator | 2026-03-28 02:44:35 | INFO  | Task 8ced1a5c-3b76-40bb-bcea-7a907cb6c735 (mariadb) was prepared for execution.
2026-03-28 02:44:35.282102 | orchestrator | 2026-03-28 02:44:35 | INFO  | It takes a moment until task 8ced1a5c-3b76-40bb-bcea-7a907cb6c735 (mariadb) has been started and output is visible here.
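The PLAY RECAP above reports `ok=9 changed=6 unreachable=0 failed=0` for all three nodes, which is what marks the redis play as successful before the mariadb task starts. As a minimal sketch (a hypothetical triage helper, not part of this job), recap lines in console logs like this one can be machine-checked instead of read by eye:

```python
import re

def parse_recap(line: str) -> dict:
    """Split an Ansible PLAY RECAP host line into its integer counters."""
    host, _, counters = line.partition(":")
    result = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", counters)}
    result["host"] = host.strip()
    return result

# Recap line as it appears in the log above.
line = "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
recap = parse_recap(line)
# A play is healthy when nothing failed and every host was reachable.
assert recap["failed"] == 0 and recap["unreachable"] == 0
print(recap["host"], recap["ok"], recap["changed"])  # -> testbed-node-0 9 6
```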
2026-03-28 02:44:49.103899 | orchestrator |
2026-03-28 02:44:49.104004 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 02:44:49.104021 | orchestrator |
2026-03-28 02:44:49.104037 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 02:44:49.104057 | orchestrator | Saturday 28 March 2026 02:44:39 +0000 (0:00:00.166) 0:00:00.166 ********
2026-03-28 02:44:49.104077 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:44:49.104097 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:44:49.104116 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:44:49.104137 | orchestrator |
2026-03-28 02:44:49.104154 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 02:44:49.104166 | orchestrator | Saturday 28 March 2026 02:44:39 +0000 (0:00:00.308) 0:00:00.474 ********
2026-03-28 02:44:49.104177 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-28 02:44:49.104188 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-28 02:44:49.104200 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-28 02:44:49.104210 | orchestrator |
2026-03-28 02:44:49.104221 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-28 02:44:49.104232 | orchestrator |
2026-03-28 02:44:49.104263 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-28 02:44:49.104310 | orchestrator | Saturday 28 March 2026 02:44:40 +0000 (0:00:00.574) 0:00:01.049 ********
2026-03-28 02:44:49.104330 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 02:44:49.104349 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 02:44:49.104368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 02:44:49.104385 | orchestrator |
2026-03-28 02:44:49.104404 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 02:44:49.104423 | orchestrator | Saturday 28 March 2026 02:44:40 +0000 (0:00:00.385) 0:00:01.435 ********
2026-03-28 02:44:49.104441 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:44:49.104457 | orchestrator |
2026-03-28 02:44:49.104475 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-28 02:44:49.104494 | orchestrator | Saturday 28 March 2026 02:44:41 +0000 (0:00:00.511) 0:00:01.946 ********
2026-03-28 02:44:49.104540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:49.104597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:49.104632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:49.104647 | orchestrator |
2026-03-28 02:44:49.104660 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-28 02:44:49.104672 | orchestrator | Saturday 28 March 2026 02:44:43 +0000 (0:00:02.574) 0:00:04.521 ********
2026-03-28 02:44:49.104685 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:44:49.104699 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:49.104711 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:44:49.104723 | orchestrator |
2026-03-28 02:44:49.104735 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-28 02:44:49.104749 | orchestrator | Saturday 28 March 2026 02:44:44 +0000 (0:00:00.636) 0:00:05.157 ********
2026-03-28 02:44:49.104761 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:44:49.104774 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:44:49.104828 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:49.104851 | orchestrator |
2026-03-28 02:44:49.104869 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-28 02:44:49.104887 | orchestrator | Saturday 28 March 2026 02:44:45 +0000 (0:00:01.431) 0:00:06.589 ********
2026-03-28 02:44:49.104924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:56.656521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:56.656615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 02:44:56.656649 | orchestrator |
2026-03-28 02:44:56.656662 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-28 02:44:56.656673 | orchestrator | Saturday 28 March 2026 02:44:49 +0000 (0:00:03.213) 0:00:09.802 ********
2026-03-28 02:44:56.656682 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:44:56.656692 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:44:56.656701 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:56.656710 | orchestrator |
2026-03-28 02:44:56.656719 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-28 02:44:56.656741 | orchestrator | Saturday 28 March 2026 02:44:50 +0000 (0:00:01.106) 0:00:10.909 ********
2026-03-28 02:44:56.656751 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:44:56.656759 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:44:56.656768 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:44:56.656777 | orchestrator |
2026-03-28 02:44:56.656786 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 02:44:56.656875 | orchestrator | Saturday 28 March 2026 02:44:53 +0000 (0:00:03.724) 0:00:14.633 ********
2026-03-28 02:44:56.656891 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:44:56.656901 | orchestrator |
2026-03-28 02:44:56.656910 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28 02:44:56.656920 | orchestrator | Saturday 28 March 2026 02:44:54 +0000 (0:00:00.527) 0:00:15.160 ********
2026-03-28 02:44:56.656945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:44:56.656972 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:44:56.657000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:01.325226 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:45:01.325366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:01.325421 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:45:01.325440 | orchestrator | 2026-03-28 02:45:01.325457 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 02:45:01.325473 | orchestrator | Saturday 28 March 2026 02:44:56 +0000 (0:00:02.194) 0:00:17.355 ******** 2026-03-28 02:45:01.325489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:01.325506 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:45:01.325550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:01.325579 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:45:01.325595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:01.325611 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:45:01.325627 | orchestrator | 2026-03-28 02:45:01.325642 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 02:45:01.325656 | orchestrator | Saturday 28 March 2026 02:44:59 +0000 (0:00:02.417) 0:00:19.772 ******** 2026-03-28 02:45:01.325690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:04.013132 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:45:04.013250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:04.013273 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:45:04.013307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 02:45:04.013347 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:45:04.013363 | orchestrator | 2026-03-28 02:45:04.013381 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-28 02:45:04.013398 | orchestrator | Saturday 28 March 2026 02:45:01 +0000 (0:00:02.254) 0:00:22.027 ******** 2026-03-28 02:45:04.013433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 02:45:04.013452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 02:45:04.013486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 02:47:20.444919 | orchestrator | 2026-03-28 02:47:20.445021 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-28 02:47:20.445039 | orchestrator | Saturday 28 March 2026 02:45:04 +0000 (0:00:02.683) 0:00:24.710 ******** 2026-03-28 02:47:20.445051 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:20.445063 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:47:20.445074 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:47:20.445085 | orchestrator | 2026-03-28 02:47:20.445097 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-28 02:47:20.445109 | orchestrator | Saturday 28 March 2026 02:45:04 +0000 (0:00:00.857) 0:00:25.568 ******** 2026-03-28 02:47:20.445120 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.445131 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.445142 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.445153 | orchestrator | 2026-03-28 02:47:20.445164 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-28 02:47:20.445175 | orchestrator | Saturday 28 March 2026 02:45:05 +0000 (0:00:00.559) 0:00:26.127 ******** 2026-03-28 02:47:20.445186 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.445197 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.445208 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.445219 | orchestrator | 2026-03-28 02:47:20.445230 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-28 02:47:20.445241 | orchestrator | Saturday 28 March 2026 02:45:05 +0000 (0:00:00.308) 0:00:26.436 ******** 2026-03-28 02:47:20.445253 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-28 02:47:20.445265 | orchestrator | ...ignoring 2026-03-28 02:47:20.445277 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-28 02:47:20.445289 | orchestrator | ...ignoring 2026-03-28 02:47:20.445300 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-28 02:47:20.445311 | orchestrator | ...ignoring 2026-03-28 02:47:20.445345 | orchestrator | 2026-03-28 02:47:20.445357 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-28 02:47:20.445369 | orchestrator | Saturday 28 March 2026 02:45:16 +0000 (0:00:10.875) 0:00:37.311 ******** 2026-03-28 02:47:20.445380 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.445391 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.445402 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.445413 | orchestrator | 2026-03-28 02:47:20.445424 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-28 02:47:20.445436 | orchestrator | Saturday 28 March 2026 02:45:16 +0000 (0:00:00.397) 0:00:37.709 ******** 2026-03-28 02:47:20.445446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.445459 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.445471 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.445484 | orchestrator | 2026-03-28 02:47:20.445497 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-28 02:47:20.445510 | orchestrator | Saturday 28 March 2026 02:45:17 +0000 (0:00:00.677) 0:00:38.386 ******** 2026-03-28 02:47:20.445523 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.445535 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.445547 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.445561 | orchestrator | 2026-03-28 02:47:20.445586 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-28 02:47:20.445601 | orchestrator | Saturday 28 March 2026 02:45:18 +0000 (0:00:00.540) 0:00:38.927 ******** 2026-03-28 02:47:20.445614 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 02:47:20.445626 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.445639 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.445651 | orchestrator | 2026-03-28 02:47:20.445664 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-28 02:47:20.445677 | orchestrator | Saturday 28 March 2026 02:45:18 +0000 (0:00:00.459) 0:00:39.386 ******** 2026-03-28 02:47:20.445690 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.445702 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.445715 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.445728 | orchestrator | 2026-03-28 02:47:20.445741 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-28 02:47:20.445754 | orchestrator | Saturday 28 March 2026 02:45:19 +0000 (0:00:00.460) 0:00:39.847 ******** 2026-03-28 02:47:20.445766 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.445779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.445791 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.445805 | orchestrator | 2026-03-28 02:47:20.445817 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 02:47:20.445828 | orchestrator | Saturday 28 March 2026 02:45:19 +0000 (0:00:00.687) 0:00:40.534 ******** 2026-03-28 02:47:20.445858 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.445869 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.445880 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-28 02:47:20.445891 | orchestrator | 2026-03-28 02:47:20.445902 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-28 02:47:20.445913 | orchestrator | Saturday 28 March 2026 02:45:20 +0000 (0:00:00.396) 0:00:40.931 ******** 2026-03-28 
02:47:20.445924 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:20.445935 | orchestrator | 2026-03-28 02:47:20.445946 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-28 02:47:20.445957 | orchestrator | Saturday 28 March 2026 02:45:30 +0000 (0:00:10.260) 0:00:51.191 ******** 2026-03-28 02:47:20.445967 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.445978 | orchestrator | 2026-03-28 02:47:20.445990 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 02:47:20.446001 | orchestrator | Saturday 28 March 2026 02:45:30 +0000 (0:00:00.143) 0:00:51.334 ******** 2026-03-28 02:47:20.446012 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.446127 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.446151 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.446171 | orchestrator | 2026-03-28 02:47:20.446191 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-28 02:47:20.446210 | orchestrator | Saturday 28 March 2026 02:45:31 +0000 (0:00:00.968) 0:00:52.303 ******** 2026-03-28 02:47:20.446229 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:20.446242 | orchestrator | 2026-03-28 02:47:20.446253 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-28 02:47:20.446264 | orchestrator | Saturday 28 March 2026 02:45:39 +0000 (0:00:07.765) 0:01:00.069 ******** 2026-03-28 02:47:20.446275 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.446286 | orchestrator | 2026-03-28 02:47:20.446297 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-28 02:47:20.446307 | orchestrator | Saturday 28 March 2026 02:45:41 +0000 (0:00:02.567) 0:01:02.636 ******** 2026-03-28 02:47:20.446318 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.446329 | 
orchestrator | 2026-03-28 02:47:20.446340 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-28 02:47:20.446350 | orchestrator | Saturday 28 March 2026 02:45:44 +0000 (0:00:02.487) 0:01:05.123 ******** 2026-03-28 02:47:20.446361 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:20.446372 | orchestrator | 2026-03-28 02:47:20.446383 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-28 02:47:20.446393 | orchestrator | Saturday 28 March 2026 02:45:44 +0000 (0:00:00.122) 0:01:05.246 ******** 2026-03-28 02:47:20.446404 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.446415 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:20.446425 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:20.446436 | orchestrator | 2026-03-28 02:47:20.446447 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-28 02:47:20.446458 | orchestrator | Saturday 28 March 2026 02:45:44 +0000 (0:00:00.343) 0:01:05.589 ******** 2026-03-28 02:47:20.446469 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:20.446480 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-28 02:47:20.446491 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:47:20.446501 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:47:20.446512 | orchestrator | 2026-03-28 02:47:20.446523 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 02:47:20.446533 | orchestrator | skipping: no hosts matched 2026-03-28 02:47:20.446544 | orchestrator | 2026-03-28 02:47:20.446555 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 02:47:20.446566 | orchestrator | 2026-03-28 02:47:20.446577 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-28 02:47:20.446587 | orchestrator | Saturday 28 March 2026 02:45:45 +0000 (0:00:00.555) 0:01:06.145 ******** 2026-03-28 02:47:20.446598 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:47:20.446609 | orchestrator | 2026-03-28 02:47:20.446619 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 02:47:20.446630 | orchestrator | Saturday 28 March 2026 02:46:08 +0000 (0:00:23.131) 0:01:29.277 ******** 2026-03-28 02:47:20.446641 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.446651 | orchestrator | 2026-03-28 02:47:20.446662 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 02:47:20.446673 | orchestrator | Saturday 28 March 2026 02:46:20 +0000 (0:00:11.554) 0:01:40.831 ******** 2026-03-28 02:47:20.446684 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:20.446694 | orchestrator | 2026-03-28 02:47:20.446710 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 02:47:20.446721 | orchestrator | 2026-03-28 02:47:20.446747 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 02:47:20.446759 | orchestrator | Saturday 28 March 2026 02:46:22 +0000 (0:00:02.413) 0:01:43.245 ******** 2026-03-28 02:47:20.446778 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:47:20.446789 | orchestrator | 2026-03-28 02:47:20.446800 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 02:47:20.446811 | orchestrator | Saturday 28 March 2026 02:46:40 +0000 (0:00:17.655) 0:02:00.901 ******** 2026-03-28 02:47:20.446822 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.446833 | orchestrator | 2026-03-28 02:47:20.446881 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 02:47:20.446893 
| orchestrator | Saturday 28 March 2026 02:46:56 +0000 (0:00:16.669) 0:02:17.570 ******** 2026-03-28 02:47:20.446904 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:20.446915 | orchestrator | 2026-03-28 02:47:20.446926 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 02:47:20.446937 | orchestrator | 2026-03-28 02:47:20.446948 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 02:47:20.446959 | orchestrator | Saturday 28 March 2026 02:46:59 +0000 (0:00:02.465) 0:02:20.036 ******** 2026-03-28 02:47:20.446970 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:20.446980 | orchestrator | 2026-03-28 02:47:20.446991 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 02:47:20.447002 | orchestrator | Saturday 28 March 2026 02:47:11 +0000 (0:00:12.623) 0:02:32.659 ******** 2026-03-28 02:47:20.447013 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.447023 | orchestrator | 2026-03-28 02:47:20.447034 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 02:47:20.447045 | orchestrator | Saturday 28 March 2026 02:47:17 +0000 (0:00:05.567) 0:02:38.226 ******** 2026-03-28 02:47:20.447056 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:20.447067 | orchestrator | 2026-03-28 02:47:20.447078 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 02:47:20.447088 | orchestrator | 2026-03-28 02:47:20.447099 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 02:47:20.447110 | orchestrator | Saturday 28 March 2026 02:47:19 +0000 (0:00:02.417) 0:02:40.644 ******** 2026-03-28 02:47:20.447126 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:47:20.447146 | orchestrator | 
2026-03-28 02:47:20.447165 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-28 02:47:20.447197 | orchestrator | Saturday 28 March 2026 02:47:20 +0000 (0:00:00.497) 0:02:41.141 ******** 2026-03-28 02:47:32.500659 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:32.500734 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:32.500743 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:32.500751 | orchestrator | 2026-03-28 02:47:32.500760 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-28 02:47:32.500768 | orchestrator | Saturday 28 March 2026 02:47:22 +0000 (0:00:02.224) 0:02:43.366 ******** 2026-03-28 02:47:32.500775 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:32.500782 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:32.500789 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:32.500797 | orchestrator | 2026-03-28 02:47:32.500804 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-28 02:47:32.500811 | orchestrator | Saturday 28 March 2026 02:47:24 +0000 (0:00:02.094) 0:02:45.460 ******** 2026-03-28 02:47:32.500819 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:32.500826 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:32.500833 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:32.500873 | orchestrator | 2026-03-28 02:47:32.500882 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-28 02:47:32.500890 | orchestrator | Saturday 28 March 2026 02:47:27 +0000 (0:00:02.323) 0:02:47.784 ******** 2026-03-28 02:47:32.500897 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:32.500904 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:32.500911 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:47:32.500919 | orchestrator | 
2026-03-28 02:47:32.500944 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-28 02:47:32.500952 | orchestrator | Saturday 28 March 2026 02:47:29 +0000 (0:00:02.089) 0:02:49.873 ******** 2026-03-28 02:47:32.500959 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:32.500967 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:32.500974 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:32.500981 | orchestrator | 2026-03-28 02:47:32.500989 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-28 02:47:32.500996 | orchestrator | Saturday 28 March 2026 02:47:31 +0000 (0:00:02.779) 0:02:52.653 ******** 2026-03-28 02:47:32.501004 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:32.501011 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:47:32.501018 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:47:32.501025 | orchestrator | 2026-03-28 02:47:32.501033 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:47:32.501040 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-28 02:47:32.501049 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-28 02:47:32.501056 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-28 02:47:32.501063 | orchestrator | 2026-03-28 02:47:32.501070 | orchestrator | 2026-03-28 02:47:32.501078 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:47:32.501085 | orchestrator | Saturday 28 March 2026 02:47:32 +0000 (0:00:00.325) 0:02:52.978 ******** 2026-03-28 02:47:32.501092 | orchestrator | =============================================================================== 2026-03-28 02:47:32.501109 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.79s 2026-03-28 02:47:32.501117 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.22s 2026-03-28 02:47:32.501124 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.62s 2026-03-28 02:47:32.501131 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2026-03-28 02:47:32.501138 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.26s 2026-03-28 02:47:32.501146 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.77s 2026-03-28 02:47:32.501153 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s 2026-03-28 02:47:32.501160 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s 2026-03-28 02:47:32.501167 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.72s 2026-03-28 02:47:32.501175 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.21s 2026-03-28 02:47:32.501182 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.78s 2026-03-28 02:47:32.501189 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.68s 2026-03-28 02:47:32.501196 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.57s 2026-03-28 02:47:32.501204 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.57s 2026-03-28 02:47:32.501211 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.49s 2026-03-28 02:47:32.501223 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.42s 2026-03-28 02:47:32.501237 | 
orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.42s 2026-03-28 02:47:32.501249 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.32s 2026-03-28 02:47:32.501262 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.25s 2026-03-28 02:47:32.501274 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.22s 2026-03-28 02:47:34.503224 | orchestrator | 2026-03-28 02:47:34 | INFO  | Task b69bc066-6877-46e6-ae49-a216865ab401 (rabbitmq) was prepared for execution. 2026-03-28 02:47:34.504223 | orchestrator | 2026-03-28 02:47:34 | INFO  | It takes a moment until task b69bc066-6877-46e6-ae49-a216865ab401 (rabbitmq) has been started and output is visible here. 2026-03-28 02:47:46.851591 | orchestrator | 2026-03-28 02:47:46.851727 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 02:47:46.851755 | orchestrator | 2026-03-28 02:47:46.851773 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 02:47:46.851793 | orchestrator | Saturday 28 March 2026 02:47:38 +0000 (0:00:00.201) 0:00:00.201 ******** 2026-03-28 02:47:46.851813 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:46.851833 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:47:46.851922 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:47:46.851936 | orchestrator | 2026-03-28 02:47:46.851947 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 02:47:46.851959 | orchestrator | Saturday 28 March 2026 02:47:38 +0000 (0:00:00.279) 0:00:00.481 ******** 2026-03-28 02:47:46.851970 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-28 02:47:46.851982 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-28 02:47:46.851993 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-28 02:47:46.852005 | orchestrator | 2026-03-28 02:47:46.852016 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-28 02:47:46.852028 | orchestrator | 2026-03-28 02:47:46.852039 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 02:47:46.852050 | orchestrator | Saturday 28 March 2026 02:47:38 +0000 (0:00:00.462) 0:00:00.944 ******** 2026-03-28 02:47:46.852062 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:47:46.852075 | orchestrator | 2026-03-28 02:47:46.852086 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 02:47:46.852097 | orchestrator | Saturday 28 March 2026 02:47:39 +0000 (0:00:00.505) 0:00:01.450 ******** 2026-03-28 02:47:46.852108 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:46.852120 | orchestrator | 2026-03-28 02:47:46.852132 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-28 02:47:46.852145 | orchestrator | Saturday 28 March 2026 02:47:40 +0000 (0:00:00.923) 0:00:02.373 ******** 2026-03-28 02:47:46.852158 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852171 | orchestrator | 2026-03-28 02:47:46.852184 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-28 02:47:46.852196 | orchestrator | Saturday 28 March 2026 02:47:40 +0000 (0:00:00.369) 0:00:02.743 ******** 2026-03-28 02:47:46.852207 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852218 | orchestrator | 2026-03-28 02:47:46.852229 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-28 02:47:46.852241 | orchestrator | Saturday 28 March 2026 02:47:41 +0000 (0:00:00.448) 0:00:03.191 ******** 
2026-03-28 02:47:46.852251 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852263 | orchestrator | 2026-03-28 02:47:46.852274 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-28 02:47:46.852285 | orchestrator | Saturday 28 March 2026 02:47:41 +0000 (0:00:00.360) 0:00:03.552 ******** 2026-03-28 02:47:46.852296 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852307 | orchestrator | 2026-03-28 02:47:46.852318 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 02:47:46.852329 | orchestrator | Saturday 28 March 2026 02:47:42 +0000 (0:00:00.490) 0:00:04.042 ******** 2026-03-28 02:47:46.852358 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:47:46.852398 | orchestrator | 2026-03-28 02:47:46.852410 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 02:47:46.852421 | orchestrator | Saturday 28 March 2026 02:47:42 +0000 (0:00:00.734) 0:00:04.776 ******** 2026-03-28 02:47:46.852432 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:47:46.852443 | orchestrator | 2026-03-28 02:47:46.852454 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-28 02:47:46.852465 | orchestrator | Saturday 28 March 2026 02:47:43 +0000 (0:00:00.843) 0:00:05.620 ******** 2026-03-28 02:47:46.852476 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852487 | orchestrator | 2026-03-28 02:47:46.852498 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-28 02:47:46.852509 | orchestrator | Saturday 28 March 2026 02:47:43 +0000 (0:00:00.331) 0:00:05.951 ******** 2026-03-28 02:47:46.852520 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:47:46.852531 | orchestrator | 2026-03-28 
02:47:46.852541 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-28 02:47:46.852552 | orchestrator | Saturday 28 March 2026 02:47:44 +0000 (0:00:00.355) 0:00:06.307 ******** 2026-03-28 02:47:46.852590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:47:46.852607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:47:46.852621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:47:46.852641 | orchestrator | 2026-03-28 02:47:46.852658 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-28 02:47:46.852669 | orchestrator | Saturday 28 March 2026 02:47:45 +0000 (0:00:00.846) 0:00:07.154 ******** 2026-03-28 02:47:46.852682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:47:46.852703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:48:05.341214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:48:05.341332 | orchestrator | 2026-03-28 02:48:05.341349 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-28 02:48:05.341363 | orchestrator | Saturday 28 March 2026 02:47:46 +0000 (0:00:01.650) 0:00:08.804 ******** 2026-03-28 02:48:05.341402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 02:48:05.341415 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 02:48:05.341426 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 02:48:05.341437 | orchestrator | 2026-03-28 02:48:05.341449 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-28 02:48:05.341460 | orchestrator | Saturday 28 March 2026 02:47:48 +0000 (0:00:01.554) 0:00:10.359 ******** 2026-03-28 02:48:05.341487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 02:48:05.341499 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 02:48:05.341511 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 02:48:05.341522 | orchestrator | 2026-03-28 02:48:05.341533 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-28 02:48:05.341544 | orchestrator | Saturday 28 March 2026 02:47:50 +0000 (0:00:01.856) 0:00:12.215 ******** 2026-03-28 02:48:05.341555 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 02:48:05.341566 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 02:48:05.341577 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 02:48:05.341588 | orchestrator | 2026-03-28 02:48:05.341599 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-28 02:48:05.341610 | orchestrator | Saturday 28 March 2026 02:47:51 +0000 (0:00:01.327) 0:00:13.543 ******** 2026-03-28 02:48:05.341621 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 02:48:05.341633 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 02:48:05.341644 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 02:48:05.341655 | orchestrator | 2026-03-28 02:48:05.341666 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-03-28 02:48:05.341677 | orchestrator | Saturday 28 March 2026 02:47:53 +0000 (0:00:01.640) 0:00:15.184 ******** 2026-03-28 02:48:05.341688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 02:48:05.341704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 02:48:05.341724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 02:48:05.341750 | orchestrator | 2026-03-28 02:48:05.341774 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-28 02:48:05.341793 | orchestrator | Saturday 28 March 2026 02:47:54 +0000 (0:00:01.490) 0:00:16.674 ******** 2026-03-28 02:48:05.341811 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 02:48:05.341829 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 02:48:05.341847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 02:48:05.341902 | orchestrator | 2026-03-28 02:48:05.341922 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 02:48:05.341941 | orchestrator | Saturday 28 March 2026 02:47:56 +0000 (0:00:01.320) 0:00:17.995 ******** 2026-03-28 02:48:05.341959 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:48:05.341979 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:48:05.342102 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:48:05.342138 | orchestrator | 2026-03-28 02:48:05.342150 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-28 02:48:05.342161 | orchestrator | 
Saturday 28 March 2026 02:47:56 +0000 (0:00:00.401) 0:00:18.397 ******** 2026-03-28 02:48:05.342175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:48:05.342197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:48:05.342211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 02:48:05.342223 | orchestrator | 2026-03-28 02:48:05.342234 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-28 02:48:05.342245 | orchestrator | Saturday 28 March 2026 02:47:57 +0000 (0:00:01.182) 0:00:19.579 ******** 2026-03-28 02:48:05.342256 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:48:05.342267 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:48:05.342278 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:48:05.342289 | orchestrator | 2026-03-28 02:48:05.342300 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-28 02:48:05.342319 | orchestrator | Saturday 28 March 2026 02:47:58 +0000 (0:00:00.832) 0:00:20.412 ******** 2026-03-28 02:48:05.342330 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:48:05.342341 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:48:05.342352 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:48:05.342363 | orchestrator | 2026-03-28 02:48:05.342374 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-28 02:48:05.342394 | orchestrator | Saturday 28 March 2026 02:48:05 +0000 (0:00:06.875) 0:00:27.287 ******** 2026-03-28 02:49:41.656099 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:49:41.656224 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:49:41.656240 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:49:41.656252 | orchestrator | 2026-03-28 02:49:41.656266 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 02:49:41.656277 | orchestrator | 2026-03-28 02:49:41.656288 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 02:49:41.656300 | orchestrator | Saturday 28 March 2026 02:48:05 +0000 (0:00:00.535) 0:00:27.822 ******** 2026-03-28 02:49:41.656311 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:49:41.656323 | orchestrator | 2026-03-28 02:49:41.656334 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 02:49:41.656346 | orchestrator | Saturday 28 March 2026 02:48:06 +0000 (0:00:00.647) 0:00:28.470 ******** 2026-03-28 02:49:41.656357 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:49:41.656368 | orchestrator | 2026-03-28 02:49:41.656379 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 02:49:41.656390 | orchestrator | Saturday 
28 March 2026 02:48:06 +0000 (0:00:00.259) 0:00:28.730 ******** 2026-03-28 02:49:41.656400 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:49:41.656411 | orchestrator | 2026-03-28 02:49:41.656422 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 02:49:41.656433 | orchestrator | Saturday 28 March 2026 02:48:13 +0000 (0:00:06.659) 0:00:35.389 ******** 2026-03-28 02:49:41.656444 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:49:41.656455 | orchestrator | 2026-03-28 02:49:41.656467 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 02:49:41.656477 | orchestrator | 2026-03-28 02:49:41.656488 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 02:49:41.656499 | orchestrator | Saturday 28 March 2026 02:49:03 +0000 (0:00:49.734) 0:01:25.123 ******** 2026-03-28 02:49:41.656510 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:49:41.656521 | orchestrator | 2026-03-28 02:49:41.656534 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 02:49:41.656547 | orchestrator | Saturday 28 March 2026 02:49:03 +0000 (0:00:00.606) 0:01:25.730 ******** 2026-03-28 02:49:41.656559 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:49:41.656572 | orchestrator | 2026-03-28 02:49:41.656584 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 02:49:41.656597 | orchestrator | Saturday 28 March 2026 02:49:04 +0000 (0:00:00.247) 0:01:25.978 ******** 2026-03-28 02:49:41.656610 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:49:41.656623 | orchestrator | 2026-03-28 02:49:41.656635 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 02:49:41.656665 | orchestrator | Saturday 28 March 2026 02:49:05 +0000 (0:00:01.493) 0:01:27.471 
******** 2026-03-28 02:49:41.656678 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:49:41.656691 | orchestrator | 2026-03-28 02:49:41.656703 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 02:49:41.656716 | orchestrator | 2026-03-28 02:49:41.656728 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 02:49:41.656741 | orchestrator | Saturday 28 March 2026 02:49:19 +0000 (0:00:14.132) 0:01:41.604 ******** 2026-03-28 02:49:41.656753 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:49:41.656765 | orchestrator | 2026-03-28 02:49:41.656801 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 02:49:41.656815 | orchestrator | Saturday 28 March 2026 02:49:20 +0000 (0:00:00.817) 0:01:42.421 ******** 2026-03-28 02:49:41.656827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:49:41.656840 | orchestrator | 2026-03-28 02:49:41.656851 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 02:49:41.656862 | orchestrator | Saturday 28 March 2026 02:49:20 +0000 (0:00:00.243) 0:01:42.665 ******** 2026-03-28 02:49:41.656873 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:49:41.656885 | orchestrator | 2026-03-28 02:49:41.656933 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 02:49:41.656952 | orchestrator | Saturday 28 March 2026 02:49:27 +0000 (0:00:06.687) 0:01:49.352 ******** 2026-03-28 02:49:41.656971 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:49:41.656988 | orchestrator | 2026-03-28 02:49:41.657007 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-28 02:49:41.657025 | orchestrator | 2026-03-28 02:49:41.657043 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-03-28 02:49:41.657060 | orchestrator | Saturday 28 March 2026 02:49:38 +0000 (0:00:10.952) 0:02:00.304 ******** 2026-03-28 02:49:41.657079 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:49:41.657096 | orchestrator | 2026-03-28 02:49:41.657115 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 02:49:41.657132 | orchestrator | Saturday 28 March 2026 02:49:38 +0000 (0:00:00.535) 0:02:00.840 ******** 2026-03-28 02:49:41.657151 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 02:49:41.657169 | orchestrator | enable_outward_rabbitmq_True 2026-03-28 02:49:41.657189 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 02:49:41.657209 | orchestrator | outward_rabbitmq_restart 2026-03-28 02:49:41.657231 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:49:41.657250 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:49:41.657267 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:49:41.657285 | orchestrator | 2026-03-28 02:49:41.657302 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-28 02:49:41.657322 | orchestrator | skipping: no hosts matched 2026-03-28 02:49:41.657340 | orchestrator | 2026-03-28 02:49:41.657358 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-28 02:49:41.657377 | orchestrator | skipping: no hosts matched 2026-03-28 02:49:41.657394 | orchestrator | 2026-03-28 02:49:41.657413 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-28 02:49:41.657432 | orchestrator | skipping: no hosts matched 2026-03-28 02:49:41.657450 | orchestrator | 2026-03-28 02:49:41.657469 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-28 02:49:41.657611 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 02:49:41.657634 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:49:41.657652 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:49:41.657670 | orchestrator | 2026-03-28 02:49:41.657689 | orchestrator | 2026-03-28 02:49:41.657706 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:49:41.657725 | orchestrator | Saturday 28 March 2026 02:49:41 +0000 (0:00:02.417) 0:02:03.257 ******** 2026-03-28 02:49:41.657745 | orchestrator | =============================================================================== 2026-03-28 02:49:41.657767 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.82s 2026-03-28 02:49:41.657788 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.84s 2026-03-28 02:49:41.657829 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.88s 2026-03-28 02:49:41.657851 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s 2026-03-28 02:49:41.657870 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.07s 2026-03-28 02:49:41.657921 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.86s 2026-03-28 02:49:41.657937 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2026-03-28 02:49:41.657948 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.64s 2026-03-28 02:49:41.657959 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.55s 2026-03-28 02:49:41.657969 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.49s 2026-03-28 02:49:41.657980 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s 2026-03-28 02:49:41.657991 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.32s 2026-03-28 02:49:41.658002 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.18s 2026-03-28 02:49:41.658074 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s 2026-03-28 02:49:41.658098 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.85s 2026-03-28 02:49:41.658109 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.84s 2026-03-28 02:49:41.658120 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.83s 2026-03-28 02:49:41.658131 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s 2026-03-28 02:49:41.658142 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.73s 2026-03-28 02:49:41.658190 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 0.54s 2026-03-28 02:49:44.113500 | orchestrator | 2026-03-28 02:49:44 | INFO  | Task 432991d7-0e30-4b97-b622-0199b9fa08f4 (openvswitch) was prepared for execution. 2026-03-28 02:49:44.113573 | orchestrator | 2026-03-28 02:49:44 | INFO  | It takes a moment until task 432991d7-0e30-4b97-b622-0199b9fa08f4 (openvswitch) has been started and output is visible here. 
2026-03-28 02:49:57.542354 | orchestrator | 2026-03-28 02:49:57.542448 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 02:49:57.542460 | orchestrator | 2026-03-28 02:49:57.542468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 02:49:57.542476 | orchestrator | Saturday 28 March 2026 02:49:48 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-03-28 02:49:57.542483 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:49:57.542492 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:49:57.542499 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:49:57.542506 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:49:57.542514 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:49:57.542521 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:49:57.542528 | orchestrator | 2026-03-28 02:49:57.542535 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 02:49:57.542543 | orchestrator | Saturday 28 March 2026 02:49:49 +0000 (0:00:00.854) 0:00:01.115 ******** 2026-03-28 02:49:57.542550 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542558 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542566 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542573 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542580 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542588 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 02:49:57.542600 | orchestrator | 2026-03-28 02:49:57.542641 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-28 02:49:57.542657 | orchestrator | 2026-03-28 02:49:57.542669 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-28 02:49:57.542681 | orchestrator | Saturday 28 March 2026 02:49:50 +0000 (0:00:00.698) 0:00:01.813 ******** 2026-03-28 02:49:57.542694 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:49:57.542707 | orchestrator | 2026-03-28 02:49:57.542718 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 02:49:57.542729 | orchestrator | Saturday 28 March 2026 02:49:51 +0000 (0:00:01.205) 0:00:03.019 ******** 2026-03-28 02:49:57.542740 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-28 02:49:57.542752 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-28 02:49:57.542765 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-28 02:49:57.542776 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-28 02:49:57.542787 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-28 02:49:57.542798 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-28 02:49:57.542809 | orchestrator | 2026-03-28 02:49:57.542821 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 02:49:57.542834 | orchestrator | Saturday 28 March 2026 02:49:52 +0000 (0:00:01.206) 0:00:04.226 ******** 2026-03-28 02:49:57.542847 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-28 02:49:57.542859 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-28 02:49:57.542871 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-28 02:49:57.542921 | orchestrator | changed: 
[testbed-node-3] => (item=openvswitch) 2026-03-28 02:49:57.542935 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-28 02:49:57.542948 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-28 02:49:57.542959 | orchestrator | 2026-03-28 02:49:57.542971 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 02:49:57.542983 | orchestrator | Saturday 28 March 2026 02:49:54 +0000 (0:00:01.509) 0:00:05.735 ******** 2026-03-28 02:49:57.542992 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-28 02:49:57.543001 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:49:57.543010 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-28 02:49:57.543019 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:49:57.543027 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-28 02:49:57.543036 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:49:57.543044 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-28 02:49:57.543052 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:49:57.543061 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-28 02:49:57.543069 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:49:57.543078 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-28 02:49:57.543086 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:49:57.543095 | orchestrator | 2026-03-28 02:49:57.543104 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-28 02:49:57.543112 | orchestrator | Saturday 28 March 2026 02:49:55 +0000 (0:00:01.237) 0:00:06.973 ******** 2026-03-28 02:49:57.543120 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:49:57.543128 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:49:57.543137 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 02:49:57.543145 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:49:57.543153 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:49:57.543161 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:49:57.543170 | orchestrator | 2026-03-28 02:49:57.543178 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-28 02:49:57.543196 | orchestrator | Saturday 28 March 2026 02:49:56 +0000 (0:00:00.795) 0:00:07.768 ******** 2026-03-28 02:49:57.543223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:57.543235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:57.543243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:57.543284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:57.543296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:57.543310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927758 | orchestrator | 2026-03-28 02:49:59.927764 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-28 02:49:59.927770 | orchestrator | Saturday 28 March 2026 02:49:57 +0000 (0:00:01.478) 0:00:09.247 ******** 2026-03-28 02:49:59.927775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:49:59.927813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806292 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806382 | orchestrator | 2026-03-28 02:50:02.806392 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-28 02:50:02.806403 | orchestrator | Saturday 28 March 2026 02:49:59 +0000 (0:00:02.379) 0:00:11.627 ******** 2026-03-28 02:50:02.806411 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:50:02.806420 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:50:02.806429 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:50:02.806438 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:50:02.806446 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:50:02.806454 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:50:02.806462 | orchestrator | 2026-03-28 02:50:02.806468 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-28 02:50:02.806473 | orchestrator | Saturday 28 March 2026 02:50:01 +0000 (0:00:01.034) 0:00:12.661 ******** 2026-03-28 02:50:02.806478 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:02.806520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 
02:50:28.549756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 02:50:28.549825 | orchestrator | 2026-03-28 02:50:28.549839 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 02:50:28.549852 | orchestrator | Saturday 28 March 2026 02:50:02 +0000 (0:00:01.851) 0:00:14.513 ******** 2026-03-28 02:50:28.549863 | orchestrator | 2026-03-28 02:50:28.549874 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 02:50:28.549885 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.331) 0:00:14.844 ******** 2026-03-28 02:50:28.549931 | orchestrator | 2026-03-28 02:50:28.549944 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 02:50:28.549955 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.152) 0:00:14.996 ******** 2026-03-28 02:50:28.549966 | orchestrator | 2026-03-28 02:50:28.549977 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-03-28 02:50:28.549987 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.145) 0:00:15.142 ******** 2026-03-28 02:50:28.549998 | orchestrator | 2026-03-28 02:50:28.550009 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 02:50:28.550082 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.132) 0:00:15.275 ******** 2026-03-28 02:50:28.550101 | orchestrator | 2026-03-28 02:50:28.550120 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 02:50:28.550138 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.131) 0:00:15.407 ******** 2026-03-28 02:50:28.550155 | orchestrator | 2026-03-28 02:50:28.550172 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-28 02:50:28.550190 | orchestrator | Saturday 28 March 2026 02:50:03 +0000 (0:00:00.136) 0:00:15.544 ******** 2026-03-28 02:50:28.550209 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:50:28.550229 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:50:28.550247 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:50:28.550263 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:50:28.550280 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:50:28.550298 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:50:28.550317 | orchestrator | 2026-03-28 02:50:28.550336 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-28 02:50:28.550357 | orchestrator | Saturday 28 March 2026 02:50:13 +0000 (0:00:09.145) 0:00:24.689 ******** 2026-03-28 02:50:28.550388 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:50:28.550405 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:50:28.550419 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:50:28.550431 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:50:28.550444 | orchestrator | ok: 
[testbed-node-4] 2026-03-28 02:50:28.550454 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:50:28.550465 | orchestrator | 2026-03-28 02:50:28.550476 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 02:50:28.550488 | orchestrator | Saturday 28 March 2026 02:50:14 +0000 (0:00:01.075) 0:00:25.765 ******** 2026-03-28 02:50:28.550499 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:50:28.550510 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:50:28.550521 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:50:28.550532 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:50:28.550543 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:50:28.550553 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:50:28.550566 | orchestrator | 2026-03-28 02:50:28.550585 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-28 02:50:28.550602 | orchestrator | Saturday 28 March 2026 02:50:22 +0000 (0:00:07.992) 0:00:33.758 ******** 2026-03-28 02:50:28.550620 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-28 02:50:28.550638 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-28 02:50:28.550655 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-28 02:50:28.550671 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-28 02:50:28.550688 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-28 02:50:28.550705 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-28 
02:50:28.550724 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-28 02:50:28.550771 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-28 02:50:41.722265 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-28 02:50:41.722382 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-28 02:50:41.722398 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-28 02:50:41.722409 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-28 02:50:41.722421 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722433 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722443 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722454 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722465 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722476 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 02:50:41.722487 | orchestrator | 2026-03-28 02:50:41.722500 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-03-28 02:50:41.722513 | orchestrator | Saturday 28 March 2026 02:50:28 +0000 (0:00:06.403) 0:00:40.161 ******** 2026-03-28 02:50:41.722525 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-28 02:50:41.722537 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:50:41.722549 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-28 02:50:41.722560 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:50:41.722570 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-28 02:50:41.722580 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:50:41.722591 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-28 02:50:41.722602 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-28 02:50:41.722612 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-28 02:50:41.722623 | orchestrator | 2026-03-28 02:50:41.722633 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-28 02:50:41.722644 | orchestrator | Saturday 28 March 2026 02:50:30 +0000 (0:00:02.401) 0:00:42.562 ******** 2026-03-28 02:50:41.722655 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-28 02:50:41.722665 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:50:41.722687 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-28 02:50:41.722697 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:50:41.722708 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-28 02:50:41.722719 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:50:41.722730 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-28 02:50:41.722741 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-28 02:50:41.722770 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-28 02:50:41.722781 | orchestrator 
| 2026-03-28 02:50:41.722792 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 02:50:41.722804 | orchestrator | Saturday 28 March 2026 02:50:34 +0000 (0:00:03.120) 0:00:45.683 ******** 2026-03-28 02:50:41.722816 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:50:41.722829 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:50:41.722870 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:50:41.722882 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:50:41.722893 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:50:41.722904 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:50:41.722957 | orchestrator | 2026-03-28 02:50:41.722969 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:50:41.722983 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:50:41.722996 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:50:41.723008 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 02:50:41.723019 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 02:50:41.723031 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 02:50:41.723043 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 02:50:41.723054 | orchestrator | 2026-03-28 02:50:41.723066 | orchestrator | 2026-03-28 02:50:41.723078 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:50:41.723090 | orchestrator | Saturday 28 March 2026 02:50:41 +0000 (0:00:07.229) 0:00:52.912 ******** 2026-03-28 02:50:41.723121 | 
orchestrator | =============================================================================== 2026-03-28 02:50:41.723133 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.22s 2026-03-28 02:50:41.723145 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.15s 2026-03-28 02:50:41.723156 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.40s 2026-03-28 02:50:41.723169 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.12s 2026-03-28 02:50:41.723180 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s 2026-03-28 02:50:41.723192 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.38s 2026-03-28 02:50:41.723203 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.85s 2026-03-28 02:50:41.723213 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.51s 2026-03-28 02:50:41.723224 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.48s 2026-03-28 02:50:41.723235 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s 2026-03-28 02:50:41.723246 | orchestrator | module-load : Load modules ---------------------------------------------- 1.21s 2026-03-28 02:50:41.723256 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.21s 2026-03-28 02:50:41.723267 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.08s 2026-03-28 02:50:41.723278 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.03s 2026-03-28 02:50:41.723289 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.03s 2026-03-28 02:50:41.723299 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.85s 2026-03-28 02:50:41.723310 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s 2026-03-28 02:50:41.723321 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-03-28 02:50:44.119057 | orchestrator | 2026-03-28 02:50:44 | INFO  | Task 32122b03-723f-4a63-abf5-1161222fed62 (ovn) was prepared for execution. 2026-03-28 02:50:44.119310 | orchestrator | 2026-03-28 02:50:44 | INFO  | It takes a moment until task 32122b03-723f-4a63-abf5-1161222fed62 (ovn) has been started and output is visible here. 2026-03-28 02:50:54.576641 | orchestrator | 2026-03-28 02:50:54.576754 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 02:50:54.576770 | orchestrator | 2026-03-28 02:50:54.576782 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 02:50:54.576793 | orchestrator | Saturday 28 March 2026 02:50:48 +0000 (0:00:00.159) 0:00:00.159 ******** 2026-03-28 02:50:54.576805 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:50:54.576817 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:50:54.576828 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:50:54.576839 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:50:54.576850 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:50:54.576861 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:50:54.576872 | orchestrator | 2026-03-28 02:50:54.576883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 02:50:54.576895 | orchestrator | Saturday 28 March 2026 02:50:48 +0000 (0:00:00.601) 0:00:00.761 ******** 2026-03-28 02:50:54.576970 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-28 02:50:54.576985 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-28 
02:50:54.576996 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-28 02:50:54.577007 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-28 02:50:54.577018 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-28 02:50:54.577029 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-28 02:50:54.577040 | orchestrator | 2026-03-28 02:50:54.577059 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-28 02:50:54.577084 | orchestrator | 2026-03-28 02:50:54.577111 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-28 02:50:54.577131 | orchestrator | Saturday 28 March 2026 02:50:49 +0000 (0:00:00.831) 0:00:01.592 ******** 2026-03-28 02:50:54.577151 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:50:54.577171 | orchestrator | 2026-03-28 02:50:54.577190 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-28 02:50:54.577211 | orchestrator | Saturday 28 March 2026 02:50:50 +0000 (0:00:01.018) 0:00:02.611 ******** 2026-03-28 02:50:54.577234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577376 | orchestrator | 2026-03-28 02:50:54.577388 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-28 02:50:54.577402 | orchestrator | Saturday 28 March 2026 02:50:51 +0000 (0:00:01.161) 0:00:03.772 ******** 2026-03-28 02:50:54.577423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577458 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577499 | orchestrator | 2026-03-28 02:50:54.577510 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-28 02:50:54.577522 | orchestrator | Saturday 28 March 2026 02:50:53 +0000 (0:00:01.483) 0:00:05.256 ******** 2026-03-28 02:50:54.577533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:50:54.577563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501091 | orchestrator | 2026-03-28 02:51:19.501100 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-28 02:51:19.501107 | orchestrator | Saturday 28 March 2026 02:50:54 +0000 (0:00:01.128) 0:00:06.384 ******** 2026-03-28 02:51:19.501114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501122 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501183 | orchestrator | 2026-03-28 02:51:19.501190 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-28 02:51:19.501197 | orchestrator | Saturday 28 March 2026 02:50:56 +0000 (0:00:01.536) 0:00:07.920 ******** 
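Each service in the tasks above is described by a dict with `container_name`, `image`, and a `volumes` list of Docker-style bind specs (`host:container[:mode]`, or a named volume such as `kolla_logs:/var/log/kolla/`). As an illustration only (this parser is not part of kolla-ansible), the specs logged for the `ovn_controller` container break down like this:

```python
# Illustrative sketch: split the Docker-style volume specs seen in the
# service dicts above into (source, destination, mode) triples.
# Assumes Docker's convention that a two-part spec defaults to "rw".

def parse_volume(spec):
    parts = spec.split(":")
    if len(parts) == 2:
        src, dst = parts
        mode = "rw"  # Docker's default when no mode is given
    elif len(parts) == 3:
        src, dst, mode = parts
    else:
        raise ValueError(f"unrecognised volume spec: {spec!r}")
    return src, dst, mode

# Volume specs exactly as they appear in the log output above.
volumes = [
    "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
    "/run/openvswitch:/run/openvswitch:shared",
    "kolla_logs:/var/log/kolla/",
]
for v in volumes:
    print(parse_volume(v))
```

Note that `shared` here is a bind-propagation mode rather than a read/write flag; the sketch does not distinguish the two.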
2026-03-28 02:51:19.501208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501241 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:51:19.501254 | orchestrator | 2026-03-28 02:51:19.501261 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-28 02:51:19.501268 | orchestrator | Saturday 28 March 2026 02:50:57 +0000 (0:00:01.301) 0:00:09.221 ******** 2026-03-28 02:51:19.501275 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:51:19.501283 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:51:19.501289 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:51:19.501296 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:51:19.501303 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:51:19.501309 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:51:19.501316 | orchestrator | 2026-03-28 02:51:19.501322 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-28 02:51:19.501329 | orchestrator | Saturday 28 March 2026 02:50:59 +0000 (0:00:02.343) 0:00:11.565 ******** 2026-03-28 02:51:19.501336 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
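The "Configure OVN in OVSDB" task writes a set of `external-ids` keys into the local Open_vSwitch table on each node, and removes keys whose item carries `'state': 'absent'`. As a rough illustration of what those items amount to (the `render_ovs_vsctl` helper and the exact command form are assumptions for this sketch, not the role's actual implementation):

```python
# Hypothetical sketch: render the {'name': ..., 'value': ...} items
# logged above into equivalent `ovs-vsctl` command lines. The command
# form is an assumption; kolla-ansible may apply these values differently.

def render_ovs_vsctl(items):
    """Turn external-ids items into ovs-vsctl command strings."""
    cmds = []
    for item in items:
        if item.get("state") == "absent":
            # 'absent' items clear the key rather than set it.
            cmds.append(
                f"ovs-vsctl remove Open_vSwitch . external-ids {item['name']}"
            )
        else:
            cmds.append(
                f"ovs-vsctl set Open_vSwitch . external-ids:{item['name']}={item['value']}"
            )
    return cmds

# Items as logged for testbed-node-3 (abridged).
items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.13"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-bridge-mappings", "value": "physnet1:br-ex", "state": "absent"},
]
for cmd in render_ovs_vsctl(items):
    print(cmd)
```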
2026-03-28 02:51:19.501343 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-28 02:51:19.501349 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-28 02:51:19.501356 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-28 02:51:19.501363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-28 02:51:19.501369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-28 02:51:19.501380 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.653777 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.653882 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.653912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.653933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.654102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-28 02:51:56.654123 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654144 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654190 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654203 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654214 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654225 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-28 02:51:56.654236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654247 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654258 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-28 02:51:56.654302 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 02:51:56.654313 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 02:51:56.654325 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 02:51:56.654337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 02:51:56.654350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-28 02:51:56.654361 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-28 02:51:56.654374 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654386 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654398 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-28 02:51:56.654447 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 02:51:56.654459 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 02:51:56.654471 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-28 02:51:56.654483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-28 02:51:56.654495 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-28 02:51:56.654508 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-28 02:51:56.654520 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2026-03-28 02:51:56.654559 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-28 02:51:56.654573 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-28 02:51:56.654592 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-28 02:51:56.654605 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-28 02:51:56.654617 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 02:51:56.654630 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-28 02:51:56.654643 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 02:51:56.654655 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-28 02:51:56.654668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 02:51:56.654680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 02:51:56.654692 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-28 02:51:56.654703 | orchestrator | 2026-03-28 02:51:56.654715 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-28 02:51:56.654726 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:19.267) 0:00:30.833 ******** 2026-03-28 02:51:56.654737 | orchestrator | 2026-03-28 02:51:56.654748 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 02:51:56.654759 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.172) 0:00:31.005 ******** 2026-03-28 02:51:56.654770 | orchestrator | 2026-03-28 02:51:56.654781 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 02:51:56.654792 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.064) 0:00:31.070 ******** 2026-03-28 02:51:56.654802 | orchestrator | 2026-03-28 02:51:56.654813 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 02:51:56.654824 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.060) 0:00:31.130 ******** 2026-03-28 02:51:56.654835 | orchestrator | 2026-03-28 02:51:56.654846 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 02:51:56.654856 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.059) 0:00:31.189 ******** 2026-03-28 02:51:56.654867 | orchestrator | 2026-03-28 02:51:56.654878 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-28 02:51:56.654895 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.058) 0:00:31.248 ******** 2026-03-28 02:51:56.654913 | orchestrator | 2026-03-28 02:51:56.654931 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-28 02:51:56.654971 | orchestrator | Saturday 28 March 2026 02:51:19 +0000 (0:00:00.060) 0:00:31.308 ******** 2026-03-28 02:51:56.654989 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:51:56.655008 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 02:51:56.655028 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:51:56.655047 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:51:56.655067 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:51:56.655078 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:51:56.655089 | orchestrator | 2026-03-28 02:51:56.655100 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-28 02:51:56.655111 | orchestrator | Saturday 28 March 2026 02:51:21 +0000 (0:00:01.538) 0:00:32.847 ******** 2026-03-28 02:51:56.655131 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:51:56.655142 | orchestrator | changed: [testbed-node-4] 2026-03-28 02:51:56.655153 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:51:56.655163 | orchestrator | changed: [testbed-node-5] 2026-03-28 02:51:56.655173 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:51:56.655184 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:51:56.655195 | orchestrator | 2026-03-28 02:51:56.655205 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-28 02:51:56.655216 | orchestrator | 2026-03-28 02:51:56.655227 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28 02:51:56.655238 | orchestrator | Saturday 28 March 2026 02:51:54 +0000 (0:00:33.431) 0:01:06.278 ******** 2026-03-28 02:51:56.655248 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:51:56.655259 | orchestrator | 2026-03-28 02:51:56.655270 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28 02:51:56.655280 | orchestrator | Saturday 28 March 2026 02:51:55 +0000 (0:00:00.708) 0:01:06.987 ******** 2026-03-28 02:51:56.655291 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-28 02:51:56.655302 | orchestrator | 2026-03-28 02:51:56.655312 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-28 02:51:56.655323 | orchestrator | Saturday 28 March 2026 02:51:55 +0000 (0:00:00.547) 0:01:07.534 ******** 2026-03-28 02:51:56.655334 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:51:56.655344 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:51:56.655355 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:51:56.655366 | orchestrator | 2026-03-28 02:51:56.655377 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-28 02:51:56.655396 | orchestrator | Saturday 28 March 2026 02:51:56 +0000 (0:00:00.925) 0:01:08.460 ******** 2026-03-28 02:52:07.421218 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.421310 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.421323 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.421332 | orchestrator | 2026-03-28 02:52:07.421342 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-28 02:52:07.421362 | orchestrator | Saturday 28 March 2026 02:51:56 +0000 (0:00:00.294) 0:01:08.754 ******** 2026-03-28 02:52:07.421378 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.421387 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.421395 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.421404 | orchestrator | 2026-03-28 02:52:07.421412 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-28 02:52:07.421420 | orchestrator | Saturday 28 March 2026 02:51:57 +0000 (0:00:00.290) 0:01:09.045 ******** 2026-03-28 02:52:07.421428 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.421436 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.421445 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.421453 | orchestrator | 
2026-03-28 02:52:07.421461 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-28 02:52:07.421469 | orchestrator | Saturday 28 March 2026 02:51:57 +0000 (0:00:00.296) 0:01:09.342 ******** 2026-03-28 02:52:07.421477 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.421485 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.421493 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.421501 | orchestrator | 2026-03-28 02:52:07.421509 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-28 02:52:07.421517 | orchestrator | Saturday 28 March 2026 02:51:57 +0000 (0:00:00.399) 0:01:09.742 ******** 2026-03-28 02:52:07.421525 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421534 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421542 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421550 | orchestrator | 2026-03-28 02:52:07.421558 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-28 02:52:07.421580 | orchestrator | Saturday 28 March 2026 02:51:58 +0000 (0:00:00.282) 0:01:10.025 ******** 2026-03-28 02:52:07.421589 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421597 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421605 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421613 | orchestrator | 2026-03-28 02:52:07.421621 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-28 02:52:07.421629 | orchestrator | Saturday 28 March 2026 02:51:58 +0000 (0:00:00.302) 0:01:10.328 ******** 2026-03-28 02:52:07.421637 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421645 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421652 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421660 | orchestrator | 2026-03-28 
02:52:07.421669 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-28 02:52:07.421677 | orchestrator | Saturday 28 March 2026 02:51:58 +0000 (0:00:00.283) 0:01:10.611 ******** 2026-03-28 02:52:07.421684 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421692 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421700 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421708 | orchestrator | 2026-03-28 02:52:07.421716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-28 02:52:07.421724 | orchestrator | Saturday 28 March 2026 02:51:59 +0000 (0:00:00.288) 0:01:10.899 ******** 2026-03-28 02:52:07.421732 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421740 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421748 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421756 | orchestrator | 2026-03-28 02:52:07.421764 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-28 02:52:07.421772 | orchestrator | Saturday 28 March 2026 02:51:59 +0000 (0:00:00.381) 0:01:11.281 ******** 2026-03-28 02:52:07.421780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421796 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421804 | orchestrator | 2026-03-28 02:52:07.421812 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-28 02:52:07.421820 | orchestrator | Saturday 28 March 2026 02:51:59 +0000 (0:00:00.271) 0:01:11.552 ******** 2026-03-28 02:52:07.421828 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421836 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421844 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421851 | orchestrator | 2026-03-28 
02:52:07.421859 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-28 02:52:07.421867 | orchestrator | Saturday 28 March 2026 02:52:00 +0000 (0:00:00.282) 0:01:11.835 ******** 2026-03-28 02:52:07.421875 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421883 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421891 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421899 | orchestrator | 2026-03-28 02:52:07.421906 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-28 02:52:07.421914 | orchestrator | Saturday 28 March 2026 02:52:00 +0000 (0:00:00.284) 0:01:12.119 ******** 2026-03-28 02:52:07.421922 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.421930 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.421938 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.421970 | orchestrator | 2026-03-28 02:52:07.421979 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-28 02:52:07.421987 | orchestrator | Saturday 28 March 2026 02:52:00 +0000 (0:00:00.468) 0:01:12.588 ******** 2026-03-28 02:52:07.421995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422003 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422011 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422053 | orchestrator | 2026-03-28 02:52:07.422062 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-28 02:52:07.422076 | orchestrator | Saturday 28 March 2026 02:52:01 +0000 (0:00:00.285) 0:01:12.874 ******** 2026-03-28 02:52:07.422084 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422092 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422100 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422107 | orchestrator | 2026-03-28 
02:52:07.422115 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-28 02:52:07.422123 | orchestrator | Saturday 28 March 2026 02:52:01 +0000 (0:00:00.318) 0:01:13.193 ******** 2026-03-28 02:52:07.422143 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422152 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422160 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422168 | orchestrator | 2026-03-28 02:52:07.422176 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-28 02:52:07.422189 | orchestrator | Saturday 28 March 2026 02:52:01 +0000 (0:00:00.288) 0:01:13.481 ******** 2026-03-28 02:52:07.422198 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:52:07.422206 | orchestrator | 2026-03-28 02:52:07.422214 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-28 02:52:07.422222 | orchestrator | Saturday 28 March 2026 02:52:02 +0000 (0:00:00.746) 0:01:14.228 ******** 2026-03-28 02:52:07.422230 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.422238 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.422246 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.422254 | orchestrator | 2026-03-28 02:52:07.422262 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-28 02:52:07.422270 | orchestrator | Saturday 28 March 2026 02:52:02 +0000 (0:00:00.456) 0:01:14.684 ******** 2026-03-28 02:52:07.422278 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:07.422286 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:07.422293 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:07.422301 | orchestrator | 2026-03-28 02:52:07.422309 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-03-28 02:52:07.422317 | orchestrator | Saturday 28 March 2026 02:52:03 +0000 (0:00:00.472) 0:01:15.157 ******** 2026-03-28 02:52:07.422325 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422333 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422341 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422349 | orchestrator | 2026-03-28 02:52:07.422357 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-28 02:52:07.422365 | orchestrator | Saturday 28 March 2026 02:52:03 +0000 (0:00:00.358) 0:01:15.516 ******** 2026-03-28 02:52:07.422373 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422381 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422389 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422397 | orchestrator | 2026-03-28 02:52:07.422405 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-28 02:52:07.422413 | orchestrator | Saturday 28 March 2026 02:52:04 +0000 (0:00:00.620) 0:01:16.137 ******** 2026-03-28 02:52:07.422420 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422428 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422444 | orchestrator | 2026-03-28 02:52:07.422452 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-28 02:52:07.422460 | orchestrator | Saturday 28 March 2026 02:52:04 +0000 (0:00:00.373) 0:01:16.510 ******** 2026-03-28 02:52:07.422468 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422476 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422484 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422492 | orchestrator | 2026-03-28 02:52:07.422500 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-03-28 02:52:07.422508 | orchestrator | Saturday 28 March 2026 02:52:05 +0000 (0:00:00.352) 0:01:16.863 ******** 2026-03-28 02:52:07.422524 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422532 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422540 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422548 | orchestrator | 2026-03-28 02:52:07.422556 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-28 02:52:07.422564 | orchestrator | Saturday 28 March 2026 02:52:05 +0000 (0:00:00.325) 0:01:17.189 ******** 2026-03-28 02:52:07.422572 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:07.422580 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:07.422587 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:07.422595 | orchestrator | 2026-03-28 02:52:07.422603 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-28 02:52:07.422611 | orchestrator | Saturday 28 March 2026 02:52:05 +0000 (0:00:00.553) 0:01:17.742 ******** 2026-03-28 02:52:07.422621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:07.422631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-28 02:52:07.422639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:07.422662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469557 | orchestrator | 2026-03-28 02:52:13.469564 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-28 02:52:13.469572 | orchestrator | Saturday 28 March 2026 02:52:07 +0000 (0:00:01.486) 0:01:19.229 ******** 2026-03-28 02:52:13.469579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469673 | orchestrator | 2026-03-28 02:52:13.469677 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-28 02:52:13.469681 | orchestrator | Saturday 28 March 2026 02:52:11 +0000 (0:00:03.620) 0:01:22.849 ******** 2026-03-28 02:52:13.469685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:13.469712 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.230591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.230788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.230825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.230846 | orchestrator | 2026-03-28 02:52:42.230868 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 02:52:42.230889 | 
orchestrator | Saturday 28 March 2026 02:52:13 +0000 (0:00:02.023) 0:01:24.872 ******** 2026-03-28 02:52:42.230926 | orchestrator | 2026-03-28 02:52:42.230944 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 02:52:42.230993 | orchestrator | Saturday 28 March 2026 02:52:13 +0000 (0:00:00.068) 0:01:24.941 ******** 2026-03-28 02:52:42.231014 | orchestrator | 2026-03-28 02:52:42.231033 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 02:52:42.231052 | orchestrator | Saturday 28 March 2026 02:52:13 +0000 (0:00:00.264) 0:01:25.205 ******** 2026-03-28 02:52:42.231070 | orchestrator | 2026-03-28 02:52:42.231092 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 02:52:42.231111 | orchestrator | Saturday 28 March 2026 02:52:13 +0000 (0:00:00.065) 0:01:25.271 ******** 2026-03-28 02:52:42.231131 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:52:42.231145 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:52:42.231158 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:52:42.231171 | orchestrator | 2026-03-28 02:52:42.231184 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-28 02:52:42.231196 | orchestrator | Saturday 28 March 2026 02:52:20 +0000 (0:00:06.844) 0:01:32.115 ******** 2026-03-28 02:52:42.231209 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:52:42.231222 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:52:42.231234 | orchestrator | changed: [testbed-node-1] 2026-03-28 02:52:42.231247 | orchestrator | 2026-03-28 02:52:42.231259 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-28 02:52:42.231271 | orchestrator | Saturday 28 March 2026 02:52:27 +0000 (0:00:07.386) 0:01:39.502 ******** 2026-03-28 02:52:42.231284 | orchestrator | changed: 
[testbed-node-1] 2026-03-28 02:52:42.231296 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:52:42.231308 | orchestrator | changed: [testbed-node-2] 2026-03-28 02:52:42.231320 | orchestrator | 2026-03-28 02:52:42.231333 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 02:52:42.231345 | orchestrator | Saturday 28 March 2026 02:52:35 +0000 (0:00:07.437) 0:01:46.939 ******** 2026-03-28 02:52:42.231357 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:52:42.231370 | orchestrator | 2026-03-28 02:52:42.231382 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 02:52:42.231395 | orchestrator | Saturday 28 March 2026 02:52:35 +0000 (0:00:00.136) 0:01:47.076 ******** 2026-03-28 02:52:42.231407 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:42.231420 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:42.231431 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:42.231442 | orchestrator | 2026-03-28 02:52:42.231453 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-28 02:52:42.231464 | orchestrator | Saturday 28 March 2026 02:52:36 +0000 (0:00:01.056) 0:01:48.132 ******** 2026-03-28 02:52:42.231475 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:42.231499 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:42.231510 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:52:42.231521 | orchestrator | 2026-03-28 02:52:42.231532 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-28 02:52:42.231542 | orchestrator | Saturday 28 March 2026 02:52:37 +0000 (0:00:00.704) 0:01:48.836 ******** 2026-03-28 02:52:42.231551 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:42.231561 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:42.231570 | orchestrator | ok: [testbed-node-2] 2026-03-28 
02:52:42.231580 | orchestrator | 2026-03-28 02:52:42.231590 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-28 02:52:42.231615 | orchestrator | Saturday 28 March 2026 02:52:37 +0000 (0:00:00.792) 0:01:49.629 ******** 2026-03-28 02:52:42.231625 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:52:42.231634 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:52:42.231644 | orchestrator | changed: [testbed-node-0] 2026-03-28 02:52:42.231653 | orchestrator | 2026-03-28 02:52:42.231663 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-28 02:52:42.231673 | orchestrator | Saturday 28 March 2026 02:52:38 +0000 (0:00:00.642) 0:01:50.272 ******** 2026-03-28 02:52:42.231683 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:42.231692 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:42.231723 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:42.231733 | orchestrator | 2026-03-28 02:52:42.231743 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-28 02:52:42.231753 | orchestrator | Saturday 28 March 2026 02:52:39 +0000 (0:00:01.248) 0:01:51.520 ******** 2026-03-28 02:52:42.231762 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:42.231772 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:42.231781 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:42.231791 | orchestrator | 2026-03-28 02:52:42.231801 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-28 02:52:42.231811 | orchestrator | Saturday 28 March 2026 02:52:40 +0000 (0:00:00.754) 0:01:52.274 ******** 2026-03-28 02:52:42.231820 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:52:42.231830 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:52:42.231840 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:52:42.231849 | orchestrator | 2026-03-28 
02:52:42.231859 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-28 02:52:42.231871 | orchestrator | Saturday 28 March 2026 02:52:40 +0000 (0:00:00.300) 0:01:52.575 ******** 2026-03-28 02:52:42.231891 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.231909 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.231925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.231941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.231994 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.232010 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.232027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.232052 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:42.232083 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:49.501026 | orchestrator | 2026-03-28 02:52:49.501121 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-28 02:52:49.501132 | orchestrator | Saturday 28 March 2026 02:52:42 +0000 (0:00:01.454) 0:01:54.029 ******** 2026-03-28 02:52:49.501142 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:49.501152 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:49.501160 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 02:52:49.501167 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501209 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501243 | orchestrator |
2026-03-28 02:52:49.501251 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-28 02:52:49.501258 | orchestrator | Saturday 28 March 2026 02:52:46 +0000 (0:00:03.916) 0:01:57.945 ********
2026-03-28 02:52:49.501278 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501286 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501293 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501300 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 02:52:49.501353 | orchestrator |
2026-03-28 02:52:49.501360 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 02:52:49.501366 | orchestrator | Saturday 28 March 2026 02:52:49 +0000 (0:00:03.134) 0:02:01.080 ********
2026-03-28 02:52:49.501373 | orchestrator |
2026-03-28 02:52:49.501380 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 02:52:49.501387 | orchestrator | Saturday 28 March 2026 02:52:49 +0000 (0:00:00.066) 0:02:01.146 ********
2026-03-28 02:52:49.501394 | orchestrator |
2026-03-28 02:52:49.501400 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 02:52:49.501407 | orchestrator | Saturday 28 March 2026 02:52:49 +0000 (0:00:00.069) 0:02:01.215 ********
2026-03-28 02:52:49.501414 | orchestrator |
2026-03-28 02:52:49.501425 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-28 02:53:13.745714 | orchestrator | Saturday 28 March 2026 02:52:49 +0000 (0:00:00.072) 0:02:01.287 ********
2026-03-28 02:53:13.745870 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:53:13.745901 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:53:13.745921 | orchestrator |
2026-03-28 02:53:13.745940 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-28 02:53:13.745959 | orchestrator | Saturday 28 March 2026 02:52:55 +0000 (0:00:06.177) 0:02:07.465 ********
2026-03-28 02:53:13.746191 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:53:13.746218 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:53:13.746239 | orchestrator |
2026-03-28 02:53:13.746259 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-28 02:53:13.746316 | orchestrator | Saturday 28 March 2026 02:53:01 +0000 (0:00:06.263) 0:02:13.729 ********
2026-03-28 02:53:13.746339 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:53:13.746360 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:53:13.746380 | orchestrator |
2026-03-28 02:53:13.746401 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-28 02:53:13.746420 | orchestrator | Saturday 28 March 2026 02:53:08 +0000 (0:00:06.165) 0:02:19.894 ********
2026-03-28 02:53:13.746439 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:53:13.746457 | orchestrator |
2026-03-28 02:53:13.746475 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-28 02:53:13.746494 | orchestrator | Saturday 28 March 2026 02:53:08 +0000 (0:00:00.139) 0:02:20.034 ********
2026-03-28 02:53:13.746513 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:53:13.746532 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:53:13.746552 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:53:13.746571 | orchestrator |
2026-03-28 02:53:13.746589 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-28 02:53:13.746609 | orchestrator | Saturday 28 March 2026 02:53:09 +0000 (0:00:01.075) 0:02:21.109 ********
2026-03-28 02:53:13.746628 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:53:13.746647 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:53:13.746665 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:53:13.746683 | orchestrator |
2026-03-28 02:53:13.746702 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-28 02:53:13.746722 | orchestrator | Saturday 28 March 2026 02:53:09 +0000 (0:00:00.650) 0:02:21.759 ********
2026-03-28 02:53:13.746741 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:53:13.746760 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:53:13.746779 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:53:13.746797 | orchestrator |
2026-03-28 02:53:13.746814 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-28 02:53:13.746832 | orchestrator | Saturday 28 March 2026 02:53:10 +0000 (0:00:00.849) 0:02:22.609 ********
2026-03-28 02:53:13.746850 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:53:13.746869 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:53:13.746888 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:53:13.746906 | orchestrator |
2026-03-28 02:53:13.746924 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-28 02:53:13.746943 | orchestrator | Saturday 28 March 2026 02:53:11 +0000 (0:00:00.615) 0:02:23.224 ********
2026-03-28 02:53:13.746961 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:53:13.747015 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:53:13.747033 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:53:13.747052 | orchestrator |
2026-03-28 02:53:13.747069 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-28 02:53:13.747086 | orchestrator | Saturday 28 March 2026 02:53:12 +0000 (0:00:01.052) 0:02:24.277 ********
2026-03-28 02:53:13.747104 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:53:13.747122 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:53:13.747140 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:53:13.747157 | orchestrator |
2026-03-28 02:53:13.747177 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:53:13.747198 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-28 02:53:13.747218 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-28 02:53:13.747237 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-28 02:53:13.747255 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:53:13.747299 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:53:13.747312 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 02:53:13.747323 | orchestrator |
2026-03-28 02:53:13.747334 | orchestrator |
2026-03-28 02:53:13.747362 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:53:13.747373 | orchestrator | Saturday 28 March 2026 02:53:13 +0000 (0:00:00.887) 0:02:25.164 ********
2026-03-28 02:53:13.747400 | orchestrator | ===============================================================================
2026-03-28 02:53:13.747411 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.43s
2026-03-28 02:53:13.747434 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.27s
2026-03-28 02:53:13.747445 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.65s
2026-03-28 02:53:13.747456 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.60s
2026-03-28 02:53:13.747467 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.02s
2026-03-28 02:53:13.747502 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s
2026-03-28 02:53:13.747514 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.62s
2026-03-28 02:53:13.747525 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.13s
2026-03-28 02:53:13.747536 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.34s
2026-03-28 02:53:13.747547 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.02s
2026-03-28 02:53:13.747557 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.54s
2026-03-28 02:53:13.747568 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s
2026-03-28 02:53:13.747579 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2026-03-28 02:53:13.747590 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.48s
2026-03-28 02:53:13.747600 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2026-03-28 02:53:13.747611 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.30s
2026-03-28 02:53:13.747622 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.25s
2026-03-28 02:53:13.747633 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.16s
2026-03-28 02:53:13.747650 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.13s
2026-03-28 02:53:13.747677 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.08s
2026-03-28 02:53:14.088470 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 02:53:14.088538 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-03-28 02:53:16.317110 | orchestrator | 2026-03-28 02:53:16 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-28 02:53:26.403742 | orchestrator | 2026-03-28 02:53:26 | INFO  | Task 3d121650-c57f-4f80-b8b8-ba98d4670c69 (wipe-partitions) was prepared for execution.
2026-03-28 02:53:26.403836 | orchestrator | 2026-03-28 02:53:26 | INFO  | It takes a moment until task 3d121650-c57f-4f80-b8b8-ba98d4670c69 (wipe-partitions) has been started and output is visible here.
2026-03-28 02:53:39.591694 | orchestrator |
2026-03-28 02:53:39.591777 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-28 02:53:39.591786 | orchestrator |
2026-03-28 02:53:39.591792 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-28 02:53:39.591798 | orchestrator | Saturday 28 March 2026 02:53:30 +0000 (0:00:00.143) 0:00:00.143 ********
2026-03-28 02:53:39.591831 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:53:39.591839 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:53:39.591844 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:53:39.591849 | orchestrator |
2026-03-28 02:53:39.591862 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-28 02:53:39.591868 | orchestrator | Saturday 28 March 2026 02:53:31 +0000 (0:00:00.620) 0:00:00.763 ********
2026-03-28 02:53:39.591873 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:53:39.591878 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:53:39.591883 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:53:39.591888 | orchestrator |
2026-03-28 02:53:39.591893 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-28 02:53:39.591899 | orchestrator | Saturday 28 March 2026 02:53:31 +0000 (0:00:00.417) 0:00:01.181 ********
2026-03-28 02:53:39.591904 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:53:39.591910 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:53:39.591915 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:53:39.591920 | orchestrator |
2026-03-28 02:53:39.591925 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-28 02:53:39.591930 | orchestrator | Saturday 28 March 2026 02:53:32 +0000 (0:00:00.579) 0:00:01.761 ********
2026-03-28 02:53:39.591935 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:53:39.591940 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:53:39.591946 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:53:39.591951 | orchestrator |
2026-03-28 02:53:39.591956 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-28 02:53:39.591961 | orchestrator | Saturday 28 March 2026 02:53:32 +0000 (0:00:00.269) 0:00:02.030 ********
2026-03-28 02:53:39.591967 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 02:53:39.591972 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 02:53:39.591977 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 02:53:39.592011 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 02:53:39.592016 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 02:53:39.592021 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 02:53:39.592037 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 02:53:39.592042 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 02:53:39.592047 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 02:53:39.592052 | orchestrator |
2026-03-28 02:53:39.592057 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-28 02:53:39.592063 | orchestrator | Saturday 28 March 2026 02:53:33 +0000 (0:00:01.256) 0:00:03.287 ********
2026-03-28 02:53:39.592068 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 02:53:39.592073 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 02:53:39.592078 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 02:53:39.592083 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 02:53:39.592088 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 02:53:39.592093 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 02:53:39.592098 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 02:53:39.592103 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 02:53:39.592109 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 02:53:39.592114 | orchestrator |
2026-03-28 02:53:39.592119 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-28 02:53:39.592124 | orchestrator | Saturday 28 March 2026 02:53:35 +0000 (0:00:01.596) 0:00:04.883 ********
2026-03-28 02:53:39.592129 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-28 02:53:39.592134 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-28 02:53:39.592139 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-28 02:53:39.592144 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-28 02:53:39.592155 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-28 02:53:39.592160 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-28 02:53:39.592165 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-28 02:53:39.592170 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-28 02:53:39.592175 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-28 02:53:39.592180 | orchestrator |
2026-03-28 02:53:39.592185 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-28 02:53:39.592190 | orchestrator | Saturday 28 March 2026 02:53:37 +0000 (0:00:02.320) 0:00:07.204 ********
2026-03-28 02:53:39.592195 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:53:39.592200 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:53:39.592205 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:53:39.592210 | orchestrator |
2026-03-28 02:53:39.592216 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-28 02:53:39.592221 | orchestrator | Saturday 28 March 2026 02:53:38 +0000 (0:00:00.708) 0:00:07.912 ********
2026-03-28 02:53:39.592226 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:53:39.592231 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:53:39.592236 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:53:39.592241 | orchestrator |
2026-03-28 02:53:39.592246 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:53:39.592252 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:53:39.592260 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:53:39.592276 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:53:39.592282 | orchestrator |
2026-03-28 02:53:39.592289 | orchestrator |
2026-03-28 02:53:39.592295 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:53:39.592301 | orchestrator | Saturday 28 March 2026 02:53:39 +0000 (0:00:00.690) 0:00:08.603 ********
2026-03-28 02:53:39.592307 | orchestrator | ===============================================================================
2026-03-28 02:53:39.592312 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s
2026-03-28 02:53:39.592318 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s
2026-03-28 02:53:39.592324 | orchestrator | Check device availability ----------------------------------------------- 1.26s
2026-03-28 02:53:39.592330 | orchestrator | Reload udev rules ------------------------------------------------------- 0.71s
2026-03-28 02:53:39.592335 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s
2026-03-28 02:53:39.592341 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s
2026-03-28 02:53:39.592347 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s
2026-03-28 02:53:39.592352 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s
2026-03-28 02:53:39.592358 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s
2026-03-28 02:53:52.014360 | orchestrator | 2026-03-28 02:53:52 | INFO  | Task 266b2f84-5fa0-42d7-9b8c-0d22049121c8 (facts) was prepared for execution.
2026-03-28 02:53:52.014499 | orchestrator | 2026-03-28 02:53:52 | INFO  | It takes a moment until task 266b2f84-5fa0-42d7-9b8c-0d22049121c8 (facts) has been started and output is visible here.
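The `Overwrite first 32M with zeros` task above zeroes the start of each OSD candidate device so that leftover partition tables and filesystem signatures cannot be rediscovered. The playbook's exact implementation is not shown in this log; the sketch below illustrates the equivalent of `dd if=/dev/zero of=DEV bs=1M count=32` in Python, demonstrated on a temporary file rather than a real disk (the function name `zero_prefix` is made up for this example).

```python
import os
import tempfile

# Illustrative sketch (not the playbook's code): zero the first `length`
# bytes of a block device, chunk by chunk, then fsync so the writes reach
# the device before udev is retriggered.
def zero_prefix(path: str, length: int = 32 * 1024 * 1024,
                chunk: int = 1024 * 1024) -> None:
    with open(path, "r+b") as dev:
        remaining = length
        while remaining > 0:
            n = min(chunk, remaining)
            dev.write(b"\x00" * n)
            remaining -= n
        dev.flush()
        os.fsync(dev.fileno())

if __name__ == "__main__":
    # Demonstrate on a 2 MiB temporary file full of 0xff bytes.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"\xff" * (2 * 1024 * 1024))
        fake_dev = f.name
    zero_prefix(fake_dev, length=1024 * 1024)  # zero only the first 1 MiB here
    data = open(fake_dev, "rb").read()
    # The first MiB is now zeroed; the rest is untouched.
    print(data[:1024 * 1024] == b"\x00" * (1024 * 1024),
          data[1024 * 1024] == 0xFF)
    os.unlink(fake_dev)
```

On a real host this must run against `/dev/sdb` etc. with the devices unmounted, which is why the play wipes signatures with `wipefs` first and reloads udev rules afterwards.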
2026-03-28 02:54:04.965745 | orchestrator |
2026-03-28 02:54:04.965837 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-28 02:54:04.965848 | orchestrator |
2026-03-28 02:54:04.965855 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-28 02:54:04.965863 | orchestrator | Saturday 28 March 2026 02:53:56 +0000 (0:00:00.271) 0:00:00.271 ********
2026-03-28 02:54:04.965886 | orchestrator | ok: [testbed-manager]
2026-03-28 02:54:04.965894 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:54:04.965900 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:54:04.965906 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:54:04.965912 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:54:04.965919 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:54:04.965925 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:04.965931 | orchestrator |
2026-03-28 02:54:04.965937 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-28 02:54:04.965944 | orchestrator | Saturday 28 March 2026 02:53:57 +0000 (0:00:01.138) 0:00:01.410 ********
2026-03-28 02:54:04.965950 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:54:04.965957 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:54:04.965963 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:54:04.965969 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:54:04.965975 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:04.965981 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:04.965987 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:04.966095 | orchestrator |
2026-03-28 02:54:04.966102 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 02:54:04.966108 | orchestrator |
2026-03-28 02:54:04.966116 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 02:54:04.966126 | orchestrator | Saturday 28 March 2026 02:53:58 +0000 (0:00:01.277) 0:00:02.688 ********
2026-03-28 02:54:04.966136 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:54:04.966146 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:54:04.966156 | orchestrator | ok: [testbed-manager]
2026-03-28 02:54:04.966165 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:54:04.966174 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:54:04.966184 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:54:04.966194 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:04.966204 | orchestrator |
2026-03-28 02:54:04.966213 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-28 02:54:04.966223 | orchestrator |
2026-03-28 02:54:04.966232 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-28 02:54:04.966243 | orchestrator | Saturday 28 March 2026 02:54:03 +0000 (0:00:05.124) 0:00:07.813 ********
2026-03-28 02:54:04.966252 | orchestrator | skipping: [testbed-manager]
2026-03-28 02:54:04.966262 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:54:04.966273 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:54:04.966283 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:54:04.966293 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:04.966303 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:04.966313 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:04.966323 | orchestrator |
2026-03-28 02:54:04.966333 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 02:54:04.966344 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966398 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966410 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966420 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966431 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966441 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966462 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 02:54:04.966472 | orchestrator |
2026-03-28 02:54:04.966483 | orchestrator |
2026-03-28 02:54:04.966493 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 02:54:04.966504 | orchestrator | Saturday 28 March 2026 02:54:04 +0000 (0:00:00.580) 0:00:08.393 ********
2026-03-28 02:54:04.966514 | orchestrator | ===============================================================================
2026-03-28 02:54:04.966524 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s
2026-03-28 02:54:04.966535 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s
2026-03-28 02:54:04.966545 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s
2026-03-28 02:54:04.966555 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-03-28 02:54:07.399600 | orchestrator | 2026-03-28 02:54:07 | INFO  | Task a5698b40-5396-4723-abbd-31c6400f0e07 (ceph-configure-lvm-volumes) was prepared for execution.
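The `osism.commons.facts` role above prepares Ansible local facts: it creates the custom facts directory (`/etc/ansible/facts.d` by convention) and optionally copies `*.fact` files into it. A JSON file named `NAME.fact` in that directory then appears under `ansible_local.NAME` on the next fact gathering. The sketch below mimics that mechanism in a temporary directory; the fact name and contents are invented for illustration.

```python
import json
import os
import tempfile

# Illustrative sketch of Ansible's local-facts convention: each *.fact file
# in the facts directory contributes one key (its basename) to ansible_local.
def write_fact(facts_dir: str, name: str, data: dict) -> str:
    """Write a JSON fact file the way a 'Copy fact files' task would."""
    path = os.path.join(facts_dir, f"{name}.fact")
    with open(path, "w") as f:
        json.dump(data, f)
    return path

def load_facts(facts_dir: str) -> dict:
    """Mimic fact loading: parse every *.fact file as JSON."""
    facts = {}
    for entry in os.listdir(facts_dir):
        if entry.endswith(".fact"):
            with open(os.path.join(facts_dir, entry)) as f:
                facts[entry[:-len(".fact")]] = json.load(f)
    return facts

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        write_fact(d, "testbed", {"role": "ceph-osd"})  # made-up fact
        print(load_facts(d))  # {'testbed': {'role': 'ceph-osd'}}
```

Real fact files may also be executables that print JSON; the JSON-file form shown here is the simplest case.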
2026-03-28 02:54:07.399688 | orchestrator | 2026-03-28 02:54:07 | INFO  | It takes a moment until task a5698b40-5396-4723-abbd-31c6400f0e07 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-28 02:54:20.921415 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 02:54:20.921530 | orchestrator | 2.16.14
2026-03-28 02:54:20.921548 | orchestrator |
2026-03-28 02:54:20.921562 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 02:54:20.921575 | orchestrator |
2026-03-28 02:54:20.921587 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 02:54:20.921599 | orchestrator | Saturday 28 March 2026 02:54:12 +0000 (0:00:00.376) 0:00:00.376 ********
2026-03-28 02:54:20.921612 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 02:54:20.921623 | orchestrator |
2026-03-28 02:54:20.921653 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 02:54:20.921665 | orchestrator | Saturday 28 March 2026 02:54:12 +0000 (0:00:00.263) 0:00:00.639 ********
2026-03-28 02:54:20.921677 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:54:20.921688 | orchestrator |
2026-03-28 02:54:20.921699 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.921710 | orchestrator | Saturday 28 March 2026 02:54:12 +0000 (0:00:00.262) 0:00:00.902 ********
2026-03-28 02:54:20.921722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 02:54:20.921733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 02:54:20.921744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 02:54:20.921755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 02:54:20.921766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 02:54:20.921777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 02:54:20.921788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 02:54:20.921799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 02:54:20.921810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 02:54:20.921821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 02:54:20.921832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 02:54:20.921843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 02:54:20.921878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 02:54:20.921890 | orchestrator |
2026-03-28 02:54:20.921901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.921913 | orchestrator | Saturday 28 March 2026 02:54:13 +0000 (0:00:00.558) 0:00:01.460 ********
2026-03-28 02:54:20.921927 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.921941 | orchestrator |
2026-03-28 02:54:20.921954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.921967 | orchestrator | Saturday 28 March 2026 02:54:13 +0000 (0:00:00.222) 0:00:01.682 ********
2026-03-28 02:54:20.921980 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.921993 | orchestrator |
2026-03-28 02:54:20.922152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922185 | orchestrator | Saturday 28 March 2026 02:54:13 +0000 (0:00:00.239) 0:00:01.922 ********
2026-03-28 02:54:20.922203 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.922222 | orchestrator |
2026-03-28 02:54:20.922241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922257 | orchestrator | Saturday 28 March 2026 02:54:13 +0000 (0:00:00.208) 0:00:02.130 ********
2026-03-28 02:54:20.922274 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.922292 | orchestrator |
2026-03-28 02:54:20.922310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922328 | orchestrator | Saturday 28 March 2026 02:54:14 +0000 (0:00:00.331) 0:00:02.462 ********
2026-03-28 02:54:20.922345 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.922364 | orchestrator |
2026-03-28 02:54:20.922382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922401 | orchestrator | Saturday 28 March 2026 02:54:14 +0000 (0:00:00.256) 0:00:02.719 ********
2026-03-28 02:54:20.922418 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.922436 | orchestrator |
2026-03-28 02:54:20.922455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922473 | orchestrator | Saturday 28 March 2026 02:54:14 +0000 (0:00:00.227) 0:00:02.946 ********
2026-03-28 02:54:20.922491 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:54:20.922509 | orchestrator |
2026-03-28 02:54:20.922527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:20.922545 | orchestrator | Saturday 28 March 2026 02:54:14 +0000 (0:00:00.225) 0:00:03.172 ********
2026-03-28 02:54:20.922563 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.922583 | orchestrator | 2026-03-28 02:54:20.922599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:20.922617 | orchestrator | Saturday 28 March 2026 02:54:15 +0000 (0:00:00.213) 0:00:03.386 ******** 2026-03-28 02:54:20.922636 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7) 2026-03-28 02:54:20.922656 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7) 2026-03-28 02:54:20.922674 | orchestrator | 2026-03-28 02:54:20.922693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:20.922739 | orchestrator | Saturday 28 March 2026 02:54:15 +0000 (0:00:00.669) 0:00:04.055 ******** 2026-03-28 02:54:20.922759 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e) 2026-03-28 02:54:20.922777 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e) 2026-03-28 02:54:20.922797 | orchestrator | 2026-03-28 02:54:20.922815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:20.922834 | orchestrator | Saturday 28 March 2026 02:54:16 +0000 (0:00:00.721) 0:00:04.777 ******** 2026-03-28 02:54:20.922868 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94) 2026-03-28 02:54:20.922906 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94) 2026-03-28 02:54:20.922924 | orchestrator | 2026-03-28 02:54:20.922943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:20.922962 | orchestrator | Saturday 28 March 2026 02:54:17 
+0000 (0:00:00.929) 0:00:05.706 ******** 2026-03-28 02:54:20.922979 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2) 2026-03-28 02:54:20.923026 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2) 2026-03-28 02:54:20.923046 | orchestrator | 2026-03-28 02:54:20.923064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:20.923082 | orchestrator | Saturday 28 March 2026 02:54:17 +0000 (0:00:00.493) 0:00:06.200 ******** 2026-03-28 02:54:20.923099 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 02:54:20.923117 | orchestrator | 2026-03-28 02:54:20.923135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923152 | orchestrator | Saturday 28 March 2026 02:54:18 +0000 (0:00:00.365) 0:00:06.566 ******** 2026-03-28 02:54:20.923169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-28 02:54:20.923188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-28 02:54:20.923205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-28 02:54:20.923222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-28 02:54:20.923240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-28 02:54:20.923257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-28 02:54:20.923274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-28 02:54:20.923292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-03-28 02:54:20.923309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-28 02:54:20.923326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-28 02:54:20.923344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-28 02:54:20.923361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-28 02:54:20.923378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-28 02:54:20.923395 | orchestrator | 2026-03-28 02:54:20.923414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923430 | orchestrator | Saturday 28 March 2026 02:54:18 +0000 (0:00:00.412) 0:00:06.979 ******** 2026-03-28 02:54:20.923448 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923466 | orchestrator | 2026-03-28 02:54:20.923484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923501 | orchestrator | Saturday 28 March 2026 02:54:18 +0000 (0:00:00.247) 0:00:07.226 ******** 2026-03-28 02:54:20.923519 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923536 | orchestrator | 2026-03-28 02:54:20.923554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923571 | orchestrator | Saturday 28 March 2026 02:54:19 +0000 (0:00:00.294) 0:00:07.521 ******** 2026-03-28 02:54:20.923589 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923607 | orchestrator | 2026-03-28 02:54:20.923624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923641 | orchestrator | Saturday 28 March 2026 02:54:19 
+0000 (0:00:00.229) 0:00:07.750 ******** 2026-03-28 02:54:20.923670 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923689 | orchestrator | 2026-03-28 02:54:20.923706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923724 | orchestrator | Saturday 28 March 2026 02:54:19 +0000 (0:00:00.253) 0:00:08.004 ******** 2026-03-28 02:54:20.923742 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923759 | orchestrator | 2026-03-28 02:54:20.923777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923795 | orchestrator | Saturday 28 March 2026 02:54:19 +0000 (0:00:00.245) 0:00:08.249 ******** 2026-03-28 02:54:20.923812 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923829 | orchestrator | 2026-03-28 02:54:20.923847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:20.923864 | orchestrator | Saturday 28 March 2026 02:54:20 +0000 (0:00:00.686) 0:00:08.936 ******** 2026-03-28 02:54:20.923882 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:20.923900 | orchestrator | 2026-03-28 02:54:20.923928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.925471 | orchestrator | Saturday 28 March 2026 02:54:20 +0000 (0:00:00.243) 0:00:09.180 ******** 2026-03-28 02:54:28.925624 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.925655 | orchestrator | 2026-03-28 02:54:28.925677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.925716 | orchestrator | Saturday 28 March 2026 02:54:21 +0000 (0:00:00.222) 0:00:09.402 ******** 2026-03-28 02:54:28.925738 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-28 02:54:28.925759 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-28 
02:54:28.925778 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-28 02:54:28.925822 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-28 02:54:28.925843 | orchestrator | 2026-03-28 02:54:28.925864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.925883 | orchestrator | Saturday 28 March 2026 02:54:21 +0000 (0:00:00.704) 0:00:10.107 ******** 2026-03-28 02:54:28.925902 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.925928 | orchestrator | 2026-03-28 02:54:28.925939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.925951 | orchestrator | Saturday 28 March 2026 02:54:22 +0000 (0:00:00.225) 0:00:10.332 ******** 2026-03-28 02:54:28.925962 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.925979 | orchestrator | 2026-03-28 02:54:28.926086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.926110 | orchestrator | Saturday 28 March 2026 02:54:22 +0000 (0:00:00.229) 0:00:10.562 ******** 2026-03-28 02:54:28.926131 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926150 | orchestrator | 2026-03-28 02:54:28.926170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:28.926189 | orchestrator | Saturday 28 March 2026 02:54:22 +0000 (0:00:00.254) 0:00:10.816 ******** 2026-03-28 02:54:28.926209 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926229 | orchestrator | 2026-03-28 02:54:28.926249 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-28 02:54:28.926268 | orchestrator | Saturday 28 March 2026 02:54:22 +0000 (0:00:00.247) 0:00:11.064 ******** 2026-03-28 02:54:28.926287 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-28 02:54:28.926305 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-28 02:54:28.926323 | orchestrator | 2026-03-28 02:54:28.926343 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-28 02:54:28.926362 | orchestrator | Saturday 28 March 2026 02:54:23 +0000 (0:00:00.220) 0:00:11.285 ******** 2026-03-28 02:54:28.926381 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926399 | orchestrator | 2026-03-28 02:54:28.926419 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-28 02:54:28.926437 | orchestrator | Saturday 28 March 2026 02:54:23 +0000 (0:00:00.144) 0:00:11.429 ******** 2026-03-28 02:54:28.926485 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926497 | orchestrator | 2026-03-28 02:54:28.926509 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-28 02:54:28.926520 | orchestrator | Saturday 28 March 2026 02:54:23 +0000 (0:00:00.149) 0:00:11.579 ******** 2026-03-28 02:54:28.926531 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926542 | orchestrator | 2026-03-28 02:54:28.926553 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-28 02:54:28.926564 | orchestrator | Saturday 28 March 2026 02:54:23 +0000 (0:00:00.360) 0:00:11.940 ******** 2026-03-28 02:54:28.926575 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:54:28.926586 | orchestrator | 2026-03-28 02:54:28.926597 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-28 02:54:28.926608 | orchestrator | Saturday 28 March 2026 02:54:23 +0000 (0:00:00.171) 0:00:12.112 ******** 2026-03-28 02:54:28.926619 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e94d822c-120c-5920-885f-96546946f9a0'}}) 2026-03-28 02:54:28.926631 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97a2d1a8-b450-5e97-9b32-db4bafa583cb'}}) 2026-03-28 02:54:28.926646 | orchestrator | 2026-03-28 02:54:28.926665 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-28 02:54:28.926683 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.186) 0:00:12.298 ******** 2026-03-28 02:54:28.926703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e94d822c-120c-5920-885f-96546946f9a0'}})  2026-03-28 02:54:28.926724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97a2d1a8-b450-5e97-9b32-db4bafa583cb'}})  2026-03-28 02:54:28.926742 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926759 | orchestrator | 2026-03-28 02:54:28.926778 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-28 02:54:28.926796 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.181) 0:00:12.480 ******** 2026-03-28 02:54:28.926812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e94d822c-120c-5920-885f-96546946f9a0'}})  2026-03-28 02:54:28.926829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97a2d1a8-b450-5e97-9b32-db4bafa583cb'}})  2026-03-28 02:54:28.926848 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926867 | orchestrator | 2026-03-28 02:54:28.926885 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-28 02:54:28.926905 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.174) 0:00:12.654 ******** 2026-03-28 02:54:28.926923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e94d822c-120c-5920-885f-96546946f9a0'}})  2026-03-28 02:54:28.926969 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97a2d1a8-b450-5e97-9b32-db4bafa583cb'}})  2026-03-28 02:54:28.926981 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.926992 | orchestrator | 2026-03-28 02:54:28.927103 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 02:54:28.927117 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.191) 0:00:12.846 ******** 2026-03-28 02:54:28.927128 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:54:28.927139 | orchestrator | 2026-03-28 02:54:28.927151 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 02:54:28.927174 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.169) 0:00:13.015 ******** 2026-03-28 02:54:28.927194 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:54:28.927213 | orchestrator | 2026-03-28 02:54:28.927231 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 02:54:28.927251 | orchestrator | Saturday 28 March 2026 02:54:24 +0000 (0:00:00.175) 0:00:13.191 ******** 2026-03-28 02:54:28.927292 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927311 | orchestrator | 2026-03-28 02:54:28.927329 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 02:54:28.927346 | orchestrator | Saturday 28 March 2026 02:54:25 +0000 (0:00:00.142) 0:00:13.333 ******** 2026-03-28 02:54:28.927365 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927382 | orchestrator | 2026-03-28 02:54:28.927402 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 02:54:28.927423 | orchestrator | Saturday 28 March 2026 02:54:25 +0000 (0:00:00.147) 0:00:13.480 ******** 2026-03-28 02:54:28.927442 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927460 | orchestrator | 2026-03-28 
02:54:28.927479 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 02:54:28.927498 | orchestrator | Saturday 28 March 2026 02:54:25 +0000 (0:00:00.141) 0:00:13.622 ******** 2026-03-28 02:54:28.927517 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:54:28.927535 | orchestrator |  "ceph_osd_devices": { 2026-03-28 02:54:28.927551 | orchestrator |  "sdb": { 2026-03-28 02:54:28.927569 | orchestrator |  "osd_lvm_uuid": "e94d822c-120c-5920-885f-96546946f9a0" 2026-03-28 02:54:28.927587 | orchestrator |  }, 2026-03-28 02:54:28.927604 | orchestrator |  "sdc": { 2026-03-28 02:54:28.927621 | orchestrator |  "osd_lvm_uuid": "97a2d1a8-b450-5e97-9b32-db4bafa583cb" 2026-03-28 02:54:28.927634 | orchestrator |  } 2026-03-28 02:54:28.927644 | orchestrator |  } 2026-03-28 02:54:28.927654 | orchestrator | } 2026-03-28 02:54:28.927664 | orchestrator | 2026-03-28 02:54:28.927674 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-28 02:54:28.927684 | orchestrator | Saturday 28 March 2026 02:54:25 +0000 (0:00:00.368) 0:00:13.991 ******** 2026-03-28 02:54:28.927694 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927704 | orchestrator | 2026-03-28 02:54:28.927713 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-28 02:54:28.927723 | orchestrator | Saturday 28 March 2026 02:54:25 +0000 (0:00:00.160) 0:00:14.151 ******** 2026-03-28 02:54:28.927733 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927742 | orchestrator | 2026-03-28 02:54:28.927754 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-28 02:54:28.927771 | orchestrator | Saturday 28 March 2026 02:54:26 +0000 (0:00:00.165) 0:00:14.316 ******** 2026-03-28 02:54:28.927787 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:54:28.927803 | orchestrator | 2026-03-28 
02:54:28.927819 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-28 02:54:28.927836 | orchestrator | Saturday 28 March 2026 02:54:26 +0000 (0:00:00.195) 0:00:14.512 ******** 2026-03-28 02:54:28.927852 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 02:54:28.927868 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-28 02:54:28.927886 | orchestrator |  "ceph_osd_devices": { 2026-03-28 02:54:28.927902 | orchestrator |  "sdb": { 2026-03-28 02:54:28.927919 | orchestrator |  "osd_lvm_uuid": "e94d822c-120c-5920-885f-96546946f9a0" 2026-03-28 02:54:28.927929 | orchestrator |  }, 2026-03-28 02:54:28.927939 | orchestrator |  "sdc": { 2026-03-28 02:54:28.927949 | orchestrator |  "osd_lvm_uuid": "97a2d1a8-b450-5e97-9b32-db4bafa583cb" 2026-03-28 02:54:28.927958 | orchestrator |  } 2026-03-28 02:54:28.927968 | orchestrator |  }, 2026-03-28 02:54:28.927978 | orchestrator |  "lvm_volumes": [ 2026-03-28 02:54:28.927987 | orchestrator |  { 2026-03-28 02:54:28.927997 | orchestrator |  "data": "osd-block-e94d822c-120c-5920-885f-96546946f9a0", 2026-03-28 02:54:28.928035 | orchestrator |  "data_vg": "ceph-e94d822c-120c-5920-885f-96546946f9a0" 2026-03-28 02:54:28.928045 | orchestrator |  }, 2026-03-28 02:54:28.928054 | orchestrator |  { 2026-03-28 02:54:28.928064 | orchestrator |  "data": "osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb", 2026-03-28 02:54:28.928087 | orchestrator |  "data_vg": "ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb" 2026-03-28 02:54:28.928096 | orchestrator |  } 2026-03-28 02:54:28.928106 | orchestrator |  ] 2026-03-28 02:54:28.928115 | orchestrator |  } 2026-03-28 02:54:28.928125 | orchestrator | } 2026-03-28 02:54:28.928135 | orchestrator | 2026-03-28 02:54:28.928145 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-28 02:54:28.928154 | orchestrator | Saturday 28 March 2026 02:54:26 +0000 (0:00:00.241) 0:00:14.753 ******** 2026-03-28 
02:54:28.928164 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 02:54:28.928173 | orchestrator | 2026-03-28 02:54:28.928183 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-28 02:54:28.928193 | orchestrator | 2026-03-28 02:54:28.928203 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 02:54:28.928212 | orchestrator | Saturday 28 March 2026 02:54:28 +0000 (0:00:01.896) 0:00:16.650 ******** 2026-03-28 02:54:28.928222 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-28 02:54:28.928231 | orchestrator | 2026-03-28 02:54:28.928241 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 02:54:28.928251 | orchestrator | Saturday 28 March 2026 02:54:28 +0000 (0:00:00.264) 0:00:16.914 ******** 2026-03-28 02:54:28.928260 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:54:28.928270 | orchestrator | 2026-03-28 02:54:28.928294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.517973 | orchestrator | Saturday 28 March 2026 02:54:28 +0000 (0:00:00.270) 0:00:17.184 ******** 2026-03-28 02:54:38.518152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 02:54:38.518167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 02:54:38.518177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 02:54:38.518200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 02:54:38.518208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 02:54:38.518216 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 02:54:38.518225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 02:54:38.518233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 02:54:38.518241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 02:54:38.518249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 02:54:38.518257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 02:54:38.518265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 02:54:38.518273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 02:54:38.518281 | orchestrator | 2026-03-28 02:54:38.518291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518299 | orchestrator | Saturday 28 March 2026 02:54:29 +0000 (0:00:00.593) 0:00:17.778 ******** 2026-03-28 02:54:38.518307 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518315 | orchestrator | 2026-03-28 02:54:38.518324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518332 | orchestrator | Saturday 28 March 2026 02:54:29 +0000 (0:00:00.228) 0:00:18.007 ******** 2026-03-28 02:54:38.518340 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518348 | orchestrator | 2026-03-28 02:54:38.518356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518364 | orchestrator | Saturday 28 March 2026 02:54:29 +0000 (0:00:00.213) 0:00:18.221 ******** 2026-03-28 02:54:38.518392 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 02:54:38.518401 | orchestrator | 2026-03-28 02:54:38.518409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518417 | orchestrator | Saturday 28 March 2026 02:54:30 +0000 (0:00:00.214) 0:00:18.435 ******** 2026-03-28 02:54:38.518425 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518433 | orchestrator | 2026-03-28 02:54:38.518441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518449 | orchestrator | Saturday 28 March 2026 02:54:30 +0000 (0:00:00.227) 0:00:18.663 ******** 2026-03-28 02:54:38.518456 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518464 | orchestrator | 2026-03-28 02:54:38.518472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518481 | orchestrator | Saturday 28 March 2026 02:54:30 +0000 (0:00:00.220) 0:00:18.883 ******** 2026-03-28 02:54:38.518488 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518496 | orchestrator | 2026-03-28 02:54:38.518505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518512 | orchestrator | Saturday 28 March 2026 02:54:30 +0000 (0:00:00.226) 0:00:19.110 ******** 2026-03-28 02:54:38.518520 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518528 | orchestrator | 2026-03-28 02:54:38.518538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518548 | orchestrator | Saturday 28 March 2026 02:54:31 +0000 (0:00:00.220) 0:00:19.330 ******** 2026-03-28 02:54:38.518568 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:54:38.518577 | orchestrator | 2026-03-28 02:54:38.518586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518594 | 
orchestrator | Saturday 28 March 2026 02:54:31 +0000 (0:00:00.226) 0:00:19.556 ******** 2026-03-28 02:54:38.518604 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785) 2026-03-28 02:54:38.518614 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785) 2026-03-28 02:54:38.518623 | orchestrator | 2026-03-28 02:54:38.518633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518642 | orchestrator | Saturday 28 March 2026 02:54:32 +0000 (0:00:00.840) 0:00:20.396 ******** 2026-03-28 02:54:38.518651 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4) 2026-03-28 02:54:38.518661 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4) 2026-03-28 02:54:38.518670 | orchestrator | 2026-03-28 02:54:38.518679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518689 | orchestrator | Saturday 28 March 2026 02:54:32 +0000 (0:00:00.704) 0:00:21.101 ******** 2026-03-28 02:54:38.518697 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab) 2026-03-28 02:54:38.518707 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab) 2026-03-28 02:54:38.518716 | orchestrator | 2026-03-28 02:54:38.518733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518757 | orchestrator | Saturday 28 March 2026 02:54:33 +0000 (0:00:01.148) 0:00:22.249 ******** 2026-03-28 02:54:38.518767 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7) 2026-03-28 02:54:38.518776 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7) 2026-03-28 02:54:38.518785 | orchestrator | 2026-03-28 02:54:38.518794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:54:38.518807 | orchestrator | Saturday 28 March 2026 02:54:34 +0000 (0:00:00.463) 0:00:22.713 ******** 2026-03-28 02:54:38.518817 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 02:54:38.518833 | orchestrator | 2026-03-28 02:54:38.518843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:54:38.518852 | orchestrator | Saturday 28 March 2026 02:54:34 +0000 (0:00:00.397) 0:00:23.111 ******** 2026-03-28 02:54:38.518861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 02:54:38.518871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 02:54:38.518881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 02:54:38.518890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 02:54:38.518898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 02:54:38.518906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 02:54:38.518913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 02:54:38.518921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 02:54:38.518929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 02:54:38.518937 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-28 02:54:38.518946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-28 02:54:38.518954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-28 02:54:38.518962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-28 02:54:38.518970 | orchestrator |
2026-03-28 02:54:38.518978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.518986 | orchestrator | Saturday 28 March 2026 02:54:35 +0000 (0:00:00.440) 0:00:23.551 ********
2026-03-28 02:54:38.518994 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519045 | orchestrator |
2026-03-28 02:54:38.519056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519064 | orchestrator | Saturday 28 March 2026 02:54:35 +0000 (0:00:00.268) 0:00:23.820 ********
2026-03-28 02:54:38.519072 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519080 | orchestrator |
2026-03-28 02:54:38.519088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519096 | orchestrator | Saturday 28 March 2026 02:54:35 +0000 (0:00:00.219) 0:00:24.039 ********
2026-03-28 02:54:38.519104 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519112 | orchestrator |
2026-03-28 02:54:38.519120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519128 | orchestrator | Saturday 28 March 2026 02:54:36 +0000 (0:00:00.238) 0:00:24.278 ********
2026-03-28 02:54:38.519136 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519144 | orchestrator |
2026-03-28 02:54:38.519152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519160 | orchestrator | Saturday 28 March 2026 02:54:36 +0000 (0:00:00.226) 0:00:24.505 ********
2026-03-28 02:54:38.519168 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519176 | orchestrator |
2026-03-28 02:54:38.519184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519192 | orchestrator | Saturday 28 March 2026 02:54:36 +0000 (0:00:00.218) 0:00:24.723 ********
2026-03-28 02:54:38.519200 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519208 | orchestrator |
2026-03-28 02:54:38.519216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519224 | orchestrator | Saturday 28 March 2026 02:54:36 +0000 (0:00:00.231) 0:00:24.954 ********
2026-03-28 02:54:38.519232 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519247 | orchestrator |
2026-03-28 02:54:38.519255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519263 | orchestrator | Saturday 28 March 2026 02:54:36 +0000 (0:00:00.229) 0:00:25.184 ********
2026-03-28 02:54:38.519271 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:38.519279 | orchestrator |
2026-03-28 02:54:38.519287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519295 | orchestrator | Saturday 28 March 2026 02:54:37 +0000 (0:00:00.675) 0:00:25.860 ********
2026-03-28 02:54:38.519303 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-28 02:54:38.519312 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-28 02:54:38.519320 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-28 02:54:38.519328 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-28 02:54:38.519336 | orchestrator |
2026-03-28 02:54:38.519344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:38.519352 | orchestrator | Saturday 28 March 2026 02:54:38 +0000 (0:00:00.690) 0:00:26.550 ********
2026-03-28 02:54:38.519360 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086431 | orchestrator |
2026-03-28 02:54:45.086567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:45.086584 | orchestrator | Saturday 28 March 2026 02:54:38 +0000 (0:00:00.228) 0:00:26.779 ********
2026-03-28 02:54:45.086593 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086604 | orchestrator |
2026-03-28 02:54:45.086613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:45.086622 | orchestrator | Saturday 28 March 2026 02:54:38 +0000 (0:00:00.218) 0:00:26.997 ********
2026-03-28 02:54:45.086651 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086660 | orchestrator |
2026-03-28 02:54:45.086668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:45.086676 | orchestrator | Saturday 28 March 2026 02:54:38 +0000 (0:00:00.209) 0:00:27.206 ********
2026-03-28 02:54:45.086684 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086692 | orchestrator |
2026-03-28 02:54:45.086700 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 02:54:45.086707 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.234) 0:00:27.441 ********
2026-03-28 02:54:45.086716 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-28 02:54:45.086724 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-28 02:54:45.086732 | orchestrator |
2026-03-28 02:54:45.086740 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 02:54:45.086748 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.194) 0:00:27.636 ********
2026-03-28 02:54:45.086756 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086764 | orchestrator |
2026-03-28 02:54:45.086772 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 02:54:45.086780 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.159) 0:00:27.796 ********
2026-03-28 02:54:45.086789 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086797 | orchestrator |
2026-03-28 02:54:45.086805 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 02:54:45.086813 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.142) 0:00:27.938 ********
2026-03-28 02:54:45.086822 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086830 | orchestrator |
2026-03-28 02:54:45.086838 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 02:54:45.086845 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.146) 0:00:28.085 ********
2026-03-28 02:54:45.086853 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:54:45.086861 | orchestrator |
2026-03-28 02:54:45.086869 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 02:54:45.086877 | orchestrator | Saturday 28 March 2026 02:54:39 +0000 (0:00:00.147) 0:00:28.232 ********
2026-03-28 02:54:45.086912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '80a8d2d8-5d5c-5988-8f38-8985bde94181'}})
2026-03-28 02:54:45.086921 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}})
2026-03-28 02:54:45.086929 | orchestrator |
2026-03-28 02:54:45.086937 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 02:54:45.086944 | orchestrator | Saturday 28 March 2026 02:54:40 +0000 (0:00:00.204) 0:00:28.436 ********
2026-03-28 02:54:45.086954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '80a8d2d8-5d5c-5988-8f38-8985bde94181'}})
2026-03-28 02:54:45.086965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}})
2026-03-28 02:54:45.086974 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.086982 | orchestrator |
2026-03-28 02:54:45.086990 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 02:54:45.086998 | orchestrator | Saturday 28 March 2026 02:54:40 +0000 (0:00:00.402) 0:00:28.839 ********
2026-03-28 02:54:45.087067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '80a8d2d8-5d5c-5988-8f38-8985bde94181'}})
2026-03-28 02:54:45.087078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}})
2026-03-28 02:54:45.087085 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087093 | orchestrator |
2026-03-28 02:54:45.087100 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 02:54:45.087108 | orchestrator | Saturday 28 March 2026 02:54:40 +0000 (0:00:00.167) 0:00:29.007 ********
2026-03-28 02:54:45.087115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '80a8d2d8-5d5c-5988-8f38-8985bde94181'}})
2026-03-28 02:54:45.087124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}})
2026-03-28 02:54:45.087131 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087138 | orchestrator |
2026-03-28 02:54:45.087145 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 02:54:45.087152 | orchestrator | Saturday 28 March 2026 02:54:40 +0000 (0:00:00.174) 0:00:29.182 ********
2026-03-28 02:54:45.087159 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:54:45.087167 | orchestrator |
2026-03-28 02:54:45.087174 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 02:54:45.087181 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.157) 0:00:29.339 ********
2026-03-28 02:54:45.087188 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:54:45.087195 | orchestrator |
2026-03-28 02:54:45.087203 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 02:54:45.087210 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.167) 0:00:29.507 ********
2026-03-28 02:54:45.087243 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087252 | orchestrator |
2026-03-28 02:54:45.087260 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-28 02:54:45.087267 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.152) 0:00:29.660 ********
2026-03-28 02:54:45.087274 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087281 | orchestrator |
2026-03-28 02:54:45.087289 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-28 02:54:45.087298 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.160) 0:00:29.821 ********
2026-03-28 02:54:45.087315 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087322 | orchestrator |
2026-03-28 02:54:45.087329 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-28 02:54:45.087336 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.149) 0:00:29.970 ********
2026-03-28 02:54:45.087355 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 02:54:45.087362 | orchestrator |  "ceph_osd_devices": {
2026-03-28 02:54:45.087369 | orchestrator |  "sdb": {
2026-03-28 02:54:45.087378 | orchestrator |  "osd_lvm_uuid": "80a8d2d8-5d5c-5988-8f38-8985bde94181"
2026-03-28 02:54:45.087385 | orchestrator |  },
2026-03-28 02:54:45.087393 | orchestrator |  "sdc": {
2026-03-28 02:54:45.087401 | orchestrator |  "osd_lvm_uuid": "9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41"
2026-03-28 02:54:45.087408 | orchestrator |  }
2026-03-28 02:54:45.087416 | orchestrator |  }
2026-03-28 02:54:45.087424 | orchestrator | }
2026-03-28 02:54:45.087432 | orchestrator |
2026-03-28 02:54:45.087439 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 02:54:45.087447 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.162) 0:00:30.132 ********
2026-03-28 02:54:45.087455 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087462 | orchestrator |
2026-03-28 02:54:45.087469 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 02:54:45.087476 | orchestrator | Saturday 28 March 2026 02:54:41 +0000 (0:00:00.134) 0:00:30.267 ********
2026-03-28 02:54:45.087484 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087492 | orchestrator |
2026-03-28 02:54:45.087499 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 02:54:45.087506 | orchestrator | Saturday 28 March 2026 02:54:42 +0000 (0:00:00.151) 0:00:30.419 ********
2026-03-28 02:54:45.087512 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:54:45.087519 | orchestrator |
2026-03-28 02:54:45.087526 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 02:54:45.087534 | orchestrator | Saturday 28 March 2026 02:54:42 +0000 (0:00:00.157) 0:00:30.577 ********
2026-03-28 02:54:45.087541 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 02:54:45.087549 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-28 02:54:45.087556 | orchestrator |  "ceph_osd_devices": {
2026-03-28 02:54:45.087562 | orchestrator |  "sdb": {
2026-03-28 02:54:45.087570 | orchestrator |  "osd_lvm_uuid": "80a8d2d8-5d5c-5988-8f38-8985bde94181"
2026-03-28 02:54:45.087577 | orchestrator |  },
2026-03-28 02:54:45.087585 | orchestrator |  "sdc": {
2026-03-28 02:54:45.087592 | orchestrator |  "osd_lvm_uuid": "9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41"
2026-03-28 02:54:45.087599 | orchestrator |  }
2026-03-28 02:54:45.087605 | orchestrator |  },
2026-03-28 02:54:45.087613 | orchestrator |  "lvm_volumes": [
2026-03-28 02:54:45.087619 | orchestrator |  {
2026-03-28 02:54:45.087626 | orchestrator |  "data": "osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181",
2026-03-28 02:54:45.087634 | orchestrator |  "data_vg": "ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181"
2026-03-28 02:54:45.087641 | orchestrator |  },
2026-03-28 02:54:45.087647 | orchestrator |  {
2026-03-28 02:54:45.087654 | orchestrator |  "data": "osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41",
2026-03-28 02:54:45.087661 | orchestrator |  "data_vg": "ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41"
2026-03-28 02:54:45.087669 | orchestrator |  }
2026-03-28 02:54:45.087677 | orchestrator |  ]
2026-03-28 02:54:45.087684 | orchestrator |  }
2026-03-28 02:54:45.087690 | orchestrator | }
2026-03-28 02:54:45.087696 | orchestrator |
2026-03-28 02:54:45.087703 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 02:54:45.087709 | orchestrator | Saturday 28 March 2026 02:54:42 +0000 (0:00:00.484) 0:00:31.061 ********
2026-03-28 02:54:45.087716 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-28 02:54:45.087723 | orchestrator |
2026-03-28 02:54:45.087730 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 02:54:45.087736 | orchestrator |
2026-03-28 02:54:45.087742 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 02:54:45.087748 | orchestrator | Saturday 28 March 2026 02:54:44 +0000 (0:00:01.244) 0:00:32.306 ********
2026-03-28 02:54:45.087765 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 02:54:45.087772 | orchestrator |
2026-03-28 02:54:45.087778 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 02:54:45.087785 | orchestrator | Saturday 28 March 2026 02:54:44 +0000 (0:00:00.296) 0:00:32.602 ********
2026-03-28 02:54:45.087791 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:45.087798 | orchestrator |
2026-03-28 02:54:45.087804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:45.087811 | orchestrator | Saturday 28 March 2026 02:54:44 +0000 (0:00:00.312) 0:00:32.915 ********
2026-03-28 02:54:45.087818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-28 02:54:45.087824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-28 02:54:45.087831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-28 02:54:45.087838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-28 02:54:45.087844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-28 02:54:45.087861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-28 02:54:54.258753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-28 02:54:54.258881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-28 02:54:54.258904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-28 02:54:54.258940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-28 02:54:54.258959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-28 02:54:54.258974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-28 02:54:54.258990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-28 02:54:54.259005 | orchestrator |
2026-03-28 02:54:54.259071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259088 | orchestrator | Saturday 28 March 2026 02:54:45 +0000 (0:00:00.427) 0:00:33.342 ********
2026-03-28 02:54:54.259105 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259117 | orchestrator |
2026-03-28 02:54:54.259127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259136 | orchestrator | Saturday 28 March 2026 02:54:45 +0000 (0:00:00.221) 0:00:33.564 ********
2026-03-28 02:54:54.259145 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259156 | orchestrator |
2026-03-28 02:54:54.259166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259186 | orchestrator | Saturday 28 March 2026 02:54:45 +0000 (0:00:00.209) 0:00:33.773 ********
2026-03-28 02:54:54.259196 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259207 | orchestrator |
2026-03-28 02:54:54.259217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259227 | orchestrator | Saturday 28 March 2026 02:54:45 +0000 (0:00:00.203) 0:00:33.977 ********
2026-03-28 02:54:54.259237 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259252 | orchestrator |
2026-03-28 02:54:54.259268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259285 | orchestrator | Saturday 28 March 2026 02:54:46 +0000 (0:00:00.682) 0:00:34.659 ********
2026-03-28 02:54:54.259301 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259317 | orchestrator |
2026-03-28 02:54:54.259333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259350 | orchestrator | Saturday 28 March 2026 02:54:46 +0000 (0:00:00.211) 0:00:34.871 ********
2026-03-28 02:54:54.259394 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259407 | orchestrator |
2026-03-28 02:54:54.259416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259425 | orchestrator | Saturday 28 March 2026 02:54:46 +0000 (0:00:00.231) 0:00:35.102 ********
2026-03-28 02:54:54.259434 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259443 | orchestrator |
2026-03-28 02:54:54.259452 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259461 | orchestrator | Saturday 28 March 2026 02:54:47 +0000 (0:00:00.203) 0:00:35.305 ********
2026-03-28 02:54:54.259470 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259478 | orchestrator |
2026-03-28 02:54:54.259487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259496 | orchestrator | Saturday 28 March 2026 02:54:47 +0000 (0:00:00.236) 0:00:35.542 ********
2026-03-28 02:54:54.259505 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f)
2026-03-28 02:54:54.259514 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f)
2026-03-28 02:54:54.259523 | orchestrator |
2026-03-28 02:54:54.259532 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259541 | orchestrator | Saturday 28 March 2026 02:54:47 +0000 (0:00:00.417) 0:00:35.960 ********
2026-03-28 02:54:54.259549 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e)
2026-03-28 02:54:54.259558 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e)
2026-03-28 02:54:54.259567 | orchestrator |
2026-03-28 02:54:54.259576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259585 | orchestrator | Saturday 28 March 2026 02:54:48 +0000 (0:00:00.462) 0:00:36.422 ********
2026-03-28 02:54:54.259593 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8)
2026-03-28 02:54:54.259602 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8)
2026-03-28 02:54:54.259611 | orchestrator |
2026-03-28 02:54:54.259620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259628 | orchestrator | Saturday 28 March 2026 02:54:48 +0000 (0:00:00.450) 0:00:36.873 ********
2026-03-28 02:54:54.259637 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b)
2026-03-28 02:54:54.259646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b)
2026-03-28 02:54:54.259655 | orchestrator |
2026-03-28 02:54:54.259664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:54:54.259672 | orchestrator | Saturday 28 March 2026 02:54:49 +0000 (0:00:00.454) 0:00:37.327 ********
2026-03-28 02:54:54.259681 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 02:54:54.259690 | orchestrator |
2026-03-28 02:54:54.259699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.259724 | orchestrator | Saturday 28 March 2026 02:54:49 +0000 (0:00:00.362) 0:00:37.689 ********
2026-03-28 02:54:54.259733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-28 02:54:54.259742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-28 02:54:54.259751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-28 02:54:54.259766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-28 02:54:54.259775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-28 02:54:54.259784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-28 02:54:54.259800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-28 02:54:54.259808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-28 02:54:54.259817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-28 02:54:54.259826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-28 02:54:54.259834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-28 02:54:54.259843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-28 02:54:54.259852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-28 02:54:54.259860 | orchestrator |
2026-03-28 02:54:54.259869 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.259878 | orchestrator | Saturday 28 March 2026 02:54:50 +0000 (0:00:00.678) 0:00:38.368 ********
2026-03-28 02:54:54.259886 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259895 | orchestrator |
2026-03-28 02:54:54.259904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.259913 | orchestrator | Saturday 28 March 2026 02:54:50 +0000 (0:00:00.221) 0:00:38.590 ********
2026-03-28 02:54:54.259921 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259930 | orchestrator |
2026-03-28 02:54:54.259939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.259948 | orchestrator | Saturday 28 March 2026 02:54:50 +0000 (0:00:00.260) 0:00:38.850 ********
2026-03-28 02:54:54.259956 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.259965 | orchestrator |
2026-03-28 02:54:54.259974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.259982 | orchestrator | Saturday 28 March 2026 02:54:50 +0000 (0:00:00.242) 0:00:39.093 ********
2026-03-28 02:54:54.259991 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260000 | orchestrator |
2026-03-28 02:54:54.260028 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260045 | orchestrator | Saturday 28 March 2026 02:54:51 +0000 (0:00:00.223) 0:00:39.316 ********
2026-03-28 02:54:54.260057 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260066 | orchestrator |
2026-03-28 02:54:54.260075 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260084 | orchestrator | Saturday 28 March 2026 02:54:51 +0000 (0:00:00.221) 0:00:39.538 ********
2026-03-28 02:54:54.260092 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260101 | orchestrator |
2026-03-28 02:54:54.260110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260119 | orchestrator | Saturday 28 March 2026 02:54:51 +0000 (0:00:00.259) 0:00:39.797 ********
2026-03-28 02:54:54.260127 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260136 | orchestrator |
2026-03-28 02:54:54.260145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260154 | orchestrator | Saturday 28 March 2026 02:54:51 +0000 (0:00:00.214) 0:00:40.012 ********
2026-03-28 02:54:54.260163 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260172 | orchestrator |
2026-03-28 02:54:54.260181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260189 | orchestrator | Saturday 28 March 2026 02:54:51 +0000 (0:00:00.221) 0:00:40.233 ********
2026-03-28 02:54:54.260198 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-28 02:54:54.260207 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-28 02:54:54.260216 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-28 02:54:54.260225 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-28 02:54:54.260234 | orchestrator |
2026-03-28 02:54:54.260249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260258 | orchestrator | Saturday 28 March 2026 02:54:52 +0000 (0:00:00.903) 0:00:41.137 ********
2026-03-28 02:54:54.260267 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260276 | orchestrator |
2026-03-28 02:54:54.260284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260293 | orchestrator | Saturday 28 March 2026 02:54:53 +0000 (0:00:00.209) 0:00:41.346 ********
2026-03-28 02:54:54.260302 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260311 | orchestrator |
2026-03-28 02:54:54.260320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260329 | orchestrator | Saturday 28 March 2026 02:54:53 +0000 (0:00:00.209) 0:00:41.556 ********
2026-03-28 02:54:54.260338 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260347 | orchestrator |
2026-03-28 02:54:54.260355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:54:54.260364 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.736) 0:00:42.293 ********
2026-03-28 02:54:54.260373 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:54.260382 | orchestrator |
2026-03-28 02:54:54.260396 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 02:54:58.623577 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.225) 0:00:42.519 ********
2026-03-28 02:54:58.623683 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-28 02:54:58.623699 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-28 02:54:58.623710 | orchestrator |
2026-03-28 02:54:58.623723 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 02:54:58.623752 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.192) 0:00:42.711 ********
2026-03-28 02:54:58.623766 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.623777 | orchestrator |
2026-03-28 02:54:58.623787 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 02:54:58.623798 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.153) 0:00:42.864 ********
2026-03-28 02:54:58.623809 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.623820 | orchestrator |
2026-03-28 02:54:58.623830 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 02:54:58.623842 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.141) 0:00:43.006 ********
2026-03-28 02:54:58.623853 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.623863 | orchestrator |
2026-03-28 02:54:58.623873 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 02:54:58.623884 | orchestrator | Saturday 28 March 2026 02:54:54 +0000 (0:00:00.156) 0:00:43.162 ********
2026-03-28 02:54:58.623895 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:58.623906 | orchestrator |
2026-03-28 02:54:58.623916 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 02:54:58.623927 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.164) 0:00:43.327 ********
2026-03-28 02:54:58.623939 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}})
2026-03-28 02:54:58.623950 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e38c52ab-9b1d-5b26-b141-c51106128b29'}})
2026-03-28 02:54:58.623962 | orchestrator |
2026-03-28 02:54:58.623973 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 02:54:58.623984 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.184) 0:00:43.512 ********
2026-03-28 02:54:58.623995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}})
2026-03-28 02:54:58.624006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e38c52ab-9b1d-5b26-b141-c51106128b29'}})
2026-03-28 02:54:58.624055 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624090 | orchestrator |
2026-03-28 02:54:58.624103 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 02:54:58.624114 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.151) 0:00:43.663 ********
2026-03-28 02:54:58.624125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}})
2026-03-28 02:54:58.624136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e38c52ab-9b1d-5b26-b141-c51106128b29'}})
2026-03-28 02:54:58.624148 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624160 | orchestrator |
2026-03-28 02:54:58.624171 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 02:54:58.624184 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.158) 0:00:43.822 ********
2026-03-28 02:54:58.624195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}})
2026-03-28 02:54:58.624270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e38c52ab-9b1d-5b26-b141-c51106128b29'}})
2026-03-28 02:54:58.624284 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624295 | orchestrator |
2026-03-28 02:54:58.624306 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 02:54:58.624318 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.176) 0:00:43.999 ********
2026-03-28 02:54:58.624329 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:58.624339 | orchestrator |
2026-03-28 02:54:58.624350 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 02:54:58.624361 | orchestrator | Saturday 28 March 2026 02:54:55 +0000 (0:00:00.151) 0:00:44.150 ********
2026-03-28 02:54:58.624372 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:54:58.624383 | orchestrator |
2026-03-28 02:54:58.624393 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 02:54:58.624404 | orchestrator | Saturday 28 March 2026 02:54:56 +0000 (0:00:00.380) 0:00:44.531 ********
2026-03-28 02:54:58.624415 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624426 | orchestrator |
2026-03-28 02:54:58.624438 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-28 02:54:58.624448 | orchestrator | Saturday 28 March 2026 02:54:56 +0000 (0:00:00.150) 0:00:44.681 ********
2026-03-28 02:54:58.624460 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624471 | orchestrator |
2026-03-28 02:54:58.624482 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-28 02:54:58.624493 | orchestrator | Saturday 28 March 2026 02:54:56 +0000 (0:00:00.146) 0:00:44.828 ********
2026-03-28 02:54:58.624504 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624516 | orchestrator |
2026-03-28 02:54:58.624526 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-28 02:54:58.624537 | orchestrator | Saturday 28 March 2026 02:54:56 +0000 (0:00:00.155) 0:00:44.984 ********
2026-03-28 02:54:58.624549 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 02:54:58.624560 | orchestrator |  "ceph_osd_devices": {
2026-03-28 02:54:58.624571 | orchestrator |  "sdb": {
2026-03-28 02:54:58.624604 | orchestrator |  "osd_lvm_uuid": "988a6493-5e43-51ae-8e8a-a4936b4cd9b5"
2026-03-28 02:54:58.624617 | orchestrator |  },
2026-03-28 02:54:58.624628 | orchestrator |  "sdc": {
2026-03-28 02:54:58.624638 | orchestrator |  "osd_lvm_uuid": "e38c52ab-9b1d-5b26-b141-c51106128b29"
2026-03-28 02:54:58.624648 | orchestrator |  }
2026-03-28 02:54:58.624658 | orchestrator |  }
2026-03-28 02:54:58.624668 | orchestrator | }
2026-03-28 02:54:58.624679 | orchestrator |
2026-03-28 02:54:58.624700 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 02:54:58.624710 | orchestrator | Saturday 28 March 2026 02:54:56 +0000 (0:00:00.182) 0:00:45.166 ********
2026-03-28 02:54:58.624721 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624744 | orchestrator |
2026-03-28 02:54:58.624755 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 02:54:58.624765 | orchestrator | Saturday 28 March 2026 02:54:57 +0000 (0:00:00.139) 0:00:45.306 ********
2026-03-28 02:54:58.624774 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624784 | orchestrator |
2026-03-28 02:54:58.624793 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 02:54:58.624803 | orchestrator | Saturday 28 March 2026 02:54:57 +0000 (0:00:00.151) 0:00:45.458 ********
2026-03-28 02:54:58.624813 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:54:58.624824 | orchestrator |
2026-03-28 02:54:58.624834 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 02:54:58.624845 | orchestrator | Saturday 28 March 2026 02:54:57 +0000 (0:00:00.141) 0:00:45.599 ********
2026-03-28 02:54:58.624857 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 02:54:58.624867 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-28 02:54:58.624878 | orchestrator |
 "ceph_osd_devices": { 2026-03-28 02:54:58.624888 | orchestrator |  "sdb": { 2026-03-28 02:54:58.624899 | orchestrator |  "osd_lvm_uuid": "988a6493-5e43-51ae-8e8a-a4936b4cd9b5" 2026-03-28 02:54:58.624911 | orchestrator |  }, 2026-03-28 02:54:58.624921 | orchestrator |  "sdc": { 2026-03-28 02:54:58.624932 | orchestrator |  "osd_lvm_uuid": "e38c52ab-9b1d-5b26-b141-c51106128b29" 2026-03-28 02:54:58.624943 | orchestrator |  } 2026-03-28 02:54:58.624954 | orchestrator |  }, 2026-03-28 02:54:58.624965 | orchestrator |  "lvm_volumes": [ 2026-03-28 02:54:58.624974 | orchestrator |  { 2026-03-28 02:54:58.624985 | orchestrator |  "data": "osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5", 2026-03-28 02:54:58.624997 | orchestrator |  "data_vg": "ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5" 2026-03-28 02:54:58.625008 | orchestrator |  }, 2026-03-28 02:54:58.625053 | orchestrator |  { 2026-03-28 02:54:58.625065 | orchestrator |  "data": "osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29", 2026-03-28 02:54:58.625077 | orchestrator |  "data_vg": "ceph-e38c52ab-9b1d-5b26-b141-c51106128b29" 2026-03-28 02:54:58.625089 | orchestrator |  } 2026-03-28 02:54:58.625101 | orchestrator |  ] 2026-03-28 02:54:58.625112 | orchestrator |  } 2026-03-28 02:54:58.625122 | orchestrator | } 2026-03-28 02:54:58.625131 | orchestrator | 2026-03-28 02:54:58.625141 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-28 02:54:58.625152 | orchestrator | Saturday 28 March 2026 02:54:57 +0000 (0:00:00.229) 0:00:45.829 ******** 2026-03-28 02:54:58.625163 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-28 02:54:58.625174 | orchestrator | 2026-03-28 02:54:58.625186 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:54:58.625198 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 02:54:58.625211 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 02:54:58.625222 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 02:54:58.625233 | orchestrator | 2026-03-28 02:54:58.625245 | orchestrator | 2026-03-28 02:54:58.625256 | orchestrator | 2026-03-28 02:54:58.625267 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:54:58.625278 | orchestrator | Saturday 28 March 2026 02:54:58 +0000 (0:00:01.036) 0:00:46.865 ******** 2026-03-28 02:54:58.625290 | orchestrator | =============================================================================== 2026-03-28 02:54:58.625301 | orchestrator | Write configuration file ------------------------------------------------ 4.18s 2026-03-28 02:54:58.625323 | orchestrator | Add known links to the list of available block devices ------------------ 1.58s 2026-03-28 02:54:58.625335 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s 2026-03-28 02:54:58.625347 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2026-03-28 02:54:58.625359 | orchestrator | Print configuration data ------------------------------------------------ 0.95s 2026-03-28 02:54:58.625370 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-03-28 02:54:58.625382 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2026-03-28 02:54:58.625393 | orchestrator | Get initial list of available block devices ----------------------------- 0.85s 2026-03-28 02:54:58.625404 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-03-28 02:54:58.625414 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s 2026-03-28 
02:54:58.625423 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-03-28 02:54:58.625432 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.74s 2026-03-28 02:54:58.625442 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s 2026-03-28 02:54:58.625466 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-28 02:54:59.092199 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.71s 2026-03-28 02:54:59.092284 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-03-28 02:54:59.092292 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-28 02:54:59.092317 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-03-28 02:54:59.092324 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-03-28 02:54:59.092331 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-28 02:55:21.979268 | orchestrator | 2026-03-28 02:55:21 | INFO  | Task 65e242d6-b10c-4801-980a-31179273962f (sync inventory) is running in background. Output coming soon. 
2026-03-28 02:55:53.241626 | orchestrator | 2026-03-28 02:55:23 | INFO  | Starting group_vars file reorganization 2026-03-28 02:55:53.241725 | orchestrator | 2026-03-28 02:55:23 | INFO  | Moved 0 file(s) to their respective directories 2026-03-28 02:55:53.241737 | orchestrator | 2026-03-28 02:55:23 | INFO  | Group_vars file reorganization completed 2026-03-28 02:55:53.241744 | orchestrator | 2026-03-28 02:55:26 | INFO  | Starting variable preparation from inventory 2026-03-28 02:55:53.241752 | orchestrator | 2026-03-28 02:55:29 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-28 02:55:53.241759 | orchestrator | 2026-03-28 02:55:29 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-28 02:55:53.241766 | orchestrator | 2026-03-28 02:55:29 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-28 02:55:53.241773 | orchestrator | 2026-03-28 02:55:29 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-28 02:55:53.241780 | orchestrator | 2026-03-28 02:55:29 | INFO  | Variable preparation completed 2026-03-28 02:55:53.241787 | orchestrator | 2026-03-28 02:55:31 | INFO  | Starting inventory overwrite handling 2026-03-28 02:55:53.241793 | orchestrator | 2026-03-28 02:55:31 | INFO  | Handling group overwrites in 99-overwrite 2026-03-28 02:55:53.241800 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removing group frr:children from 60-generic 2026-03-28 02:55:53.241807 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-28 02:55:53.241814 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-28 02:55:53.241839 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-28 02:55:53.241846 | orchestrator | 2026-03-28 02:55:31 | INFO  | Handling group overwrites in 20-roles 2026-03-28 02:55:53.241853 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-28 02:55:53.241859 | orchestrator | 2026-03-28 02:55:31 | INFO  | Removed 5 group(s) in total 2026-03-28 02:55:53.241866 | orchestrator | 2026-03-28 02:55:31 | INFO  | Inventory overwrite handling completed 2026-03-28 02:55:53.241873 | orchestrator | 2026-03-28 02:55:33 | INFO  | Starting merge of inventory files 2026-03-28 02:55:53.241879 | orchestrator | 2026-03-28 02:55:33 | INFO  | Inventory files merged successfully 2026-03-28 02:55:53.241886 | orchestrator | 2026-03-28 02:55:39 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-28 02:55:53.241893 | orchestrator | 2026-03-28 02:55:51 | INFO  | Successfully wrote ClusterShell configuration 2026-03-28 02:55:53.241900 | orchestrator | [master e2094b2] 2026-03-28-02-55 2026-03-28 02:55:53.241907 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-28 02:55:55.653784 | orchestrator | 2026-03-28 02:55:55 | INFO  | Task b589b0f6-1217-4fa6-a194-b818e3850552 (ceph-create-lvm-devices) was prepared for execution. 2026-03-28 02:55:55.653893 | orchestrator | 2026-03-28 02:55:55 | INFO  | It takes a moment until task b589b0f6-1217-4fa6-a194-b818e3850552 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-03-28 02:56:09.617384 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 02:56:09.617493 | orchestrator | 2.16.14 2026-03-28 02:56:09.617511 | orchestrator | 2026-03-28 02:56:09.617522 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 02:56:09.617533 | orchestrator | 2026-03-28 02:56:09.617543 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 02:56:09.617553 | orchestrator | Saturday 28 March 2026 02:56:00 +0000 (0:00:00.340) 0:00:00.340 ******** 2026-03-28 02:56:09.617564 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 02:56:09.617574 | orchestrator | 2026-03-28 02:56:09.617581 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 02:56:09.617587 | orchestrator | Saturday 28 March 2026 02:56:00 +0000 (0:00:00.271) 0:00:00.611 ******** 2026-03-28 02:56:09.617593 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:09.617599 | orchestrator | 2026-03-28 02:56:09.617604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617610 | orchestrator | Saturday 28 March 2026 02:56:00 +0000 (0:00:00.258) 0:00:00.870 ******** 2026-03-28 02:56:09.617616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-28 02:56:09.617622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-28 02:56:09.617642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-28 02:56:09.617647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-28 02:56:09.617653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-28 
02:56:09.617658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-28 02:56:09.617664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-28 02:56:09.617669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-28 02:56:09.617675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-28 02:56:09.617680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-28 02:56:09.617703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-28 02:56:09.617709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-28 02:56:09.617714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-28 02:56:09.617720 | orchestrator | 2026-03-28 02:56:09.617725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617731 | orchestrator | Saturday 28 March 2026 02:56:01 +0000 (0:00:00.647) 0:00:01.518 ******** 2026-03-28 02:56:09.617736 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617742 | orchestrator | 2026-03-28 02:56:09.617748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617753 | orchestrator | Saturday 28 March 2026 02:56:01 +0000 (0:00:00.218) 0:00:01.737 ******** 2026-03-28 02:56:09.617759 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617764 | orchestrator | 2026-03-28 02:56:09.617770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617775 | orchestrator | Saturday 28 March 2026 02:56:01 +0000 (0:00:00.209) 0:00:01.946 ******** 2026-03-28 
02:56:09.617780 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617786 | orchestrator | 2026-03-28 02:56:09.617791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617797 | orchestrator | Saturday 28 March 2026 02:56:02 +0000 (0:00:00.249) 0:00:02.196 ******** 2026-03-28 02:56:09.617802 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617807 | orchestrator | 2026-03-28 02:56:09.617813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617818 | orchestrator | Saturday 28 March 2026 02:56:02 +0000 (0:00:00.258) 0:00:02.455 ******** 2026-03-28 02:56:09.617824 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617829 | orchestrator | 2026-03-28 02:56:09.617835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617840 | orchestrator | Saturday 28 March 2026 02:56:02 +0000 (0:00:00.227) 0:00:02.682 ******** 2026-03-28 02:56:09.617846 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617851 | orchestrator | 2026-03-28 02:56:09.617857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617862 | orchestrator | Saturday 28 March 2026 02:56:02 +0000 (0:00:00.256) 0:00:02.939 ******** 2026-03-28 02:56:09.617868 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617873 | orchestrator | 2026-03-28 02:56:09.617879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617884 | orchestrator | Saturday 28 March 2026 02:56:03 +0000 (0:00:00.224) 0:00:03.163 ******** 2026-03-28 02:56:09.617890 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.617895 | orchestrator | 2026-03-28 02:56:09.617901 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-28 02:56:09.617906 | orchestrator | Saturday 28 March 2026 02:56:03 +0000 (0:00:00.216) 0:00:03.380 ******** 2026-03-28 02:56:09.617911 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7) 2026-03-28 02:56:09.617918 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7) 2026-03-28 02:56:09.617923 | orchestrator | 2026-03-28 02:56:09.617929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.617950 | orchestrator | Saturday 28 March 2026 02:56:04 +0000 (0:00:00.721) 0:00:04.101 ******** 2026-03-28 02:56:09.617960 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e) 2026-03-28 02:56:09.617969 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e) 2026-03-28 02:56:09.617977 | orchestrator | 2026-03-28 02:56:09.617987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.618004 | orchestrator | Saturday 28 March 2026 02:56:04 +0000 (0:00:00.754) 0:00:04.856 ******** 2026-03-28 02:56:09.618095 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94) 2026-03-28 02:56:09.618102 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94) 2026-03-28 02:56:09.618109 | orchestrator | 2026-03-28 02:56:09.618116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.618122 | orchestrator | Saturday 28 March 2026 02:56:05 +0000 (0:00:01.176) 0:00:06.032 ******** 2026-03-28 02:56:09.618128 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2) 2026-03-28 02:56:09.618139 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2) 2026-03-28 02:56:09.618145 | orchestrator | 2026-03-28 02:56:09.618152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:09.618158 | orchestrator | Saturday 28 March 2026 02:56:06 +0000 (0:00:00.481) 0:00:06.513 ******** 2026-03-28 02:56:09.618165 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 02:56:09.618171 | orchestrator | 2026-03-28 02:56:09.618177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618183 | orchestrator | Saturday 28 March 2026 02:56:06 +0000 (0:00:00.411) 0:00:06.925 ******** 2026-03-28 02:56:09.618190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-28 02:56:09.618196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-28 02:56:09.618202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-28 02:56:09.618208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-28 02:56:09.618214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-28 02:56:09.618221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-28 02:56:09.618227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-28 02:56:09.618233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-28 02:56:09.618239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-28 02:56:09.618245 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-28 02:56:09.618251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-28 02:56:09.618257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-28 02:56:09.618264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-28 02:56:09.618270 | orchestrator | 2026-03-28 02:56:09.618276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618283 | orchestrator | Saturday 28 March 2026 02:56:07 +0000 (0:00:00.474) 0:00:07.399 ******** 2026-03-28 02:56:09.618289 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618295 | orchestrator | 2026-03-28 02:56:09.618301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618308 | orchestrator | Saturday 28 March 2026 02:56:07 +0000 (0:00:00.242) 0:00:07.642 ******** 2026-03-28 02:56:09.618314 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618320 | orchestrator | 2026-03-28 02:56:09.618326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618332 | orchestrator | Saturday 28 March 2026 02:56:07 +0000 (0:00:00.254) 0:00:07.896 ******** 2026-03-28 02:56:09.618338 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618349 | orchestrator | 2026-03-28 02:56:09.618356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618362 | orchestrator | Saturday 28 March 2026 02:56:08 +0000 (0:00:00.240) 0:00:08.136 ******** 2026-03-28 02:56:09.618368 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618375 | orchestrator | 2026-03-28 02:56:09.618381 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618387 | orchestrator | Saturday 28 March 2026 02:56:08 +0000 (0:00:00.216) 0:00:08.353 ******** 2026-03-28 02:56:09.618394 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618399 | orchestrator | 2026-03-28 02:56:09.618406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618412 | orchestrator | Saturday 28 March 2026 02:56:08 +0000 (0:00:00.259) 0:00:08.612 ******** 2026-03-28 02:56:09.618418 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618425 | orchestrator | 2026-03-28 02:56:09.618431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:09.618437 | orchestrator | Saturday 28 March 2026 02:56:09 +0000 (0:00:00.808) 0:00:09.420 ******** 2026-03-28 02:56:09.618443 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:09.618449 | orchestrator | 2026-03-28 02:56:09.618461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.280720 | orchestrator | Saturday 28 March 2026 02:56:09 +0000 (0:00:00.266) 0:00:09.686 ******** 2026-03-28 02:56:18.280853 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.280880 | orchestrator | 2026-03-28 02:56:18.280899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.280919 | orchestrator | Saturday 28 March 2026 02:56:09 +0000 (0:00:00.264) 0:00:09.951 ******** 2026-03-28 02:56:18.280938 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-28 02:56:18.280957 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-28 02:56:18.280976 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-28 02:56:18.280994 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-28 02:56:18.281067 | orchestrator | 2026-03-28 
02:56:18.281089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.281109 | orchestrator | Saturday 28 March 2026 02:56:10 +0000 (0:00:00.801) 0:00:10.753 ******** 2026-03-28 02:56:18.281129 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281147 | orchestrator | 2026-03-28 02:56:18.281166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.281183 | orchestrator | Saturday 28 March 2026 02:56:10 +0000 (0:00:00.278) 0:00:11.031 ******** 2026-03-28 02:56:18.281202 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281220 | orchestrator | 2026-03-28 02:56:18.281260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.281280 | orchestrator | Saturday 28 March 2026 02:56:11 +0000 (0:00:00.231) 0:00:11.262 ******** 2026-03-28 02:56:18.281299 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281318 | orchestrator | 2026-03-28 02:56:18.281336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:56:18.281356 | orchestrator | Saturday 28 March 2026 02:56:11 +0000 (0:00:00.227) 0:00:11.490 ******** 2026-03-28 02:56:18.281374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281392 | orchestrator | 2026-03-28 02:56:18.281408 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 02:56:18.281425 | orchestrator | Saturday 28 March 2026 02:56:11 +0000 (0:00:00.235) 0:00:11.725 ******** 2026-03-28 02:56:18.281441 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281459 | orchestrator | 2026-03-28 02:56:18.281476 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 02:56:18.281493 | orchestrator | Saturday 28 March 2026 02:56:11 +0000 (0:00:00.144) 
0:00:11.869 ******** 2026-03-28 02:56:18.281511 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e94d822c-120c-5920-885f-96546946f9a0'}}) 2026-03-28 02:56:18.281559 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97a2d1a8-b450-5e97-9b32-db4bafa583cb'}}) 2026-03-28 02:56:18.281577 | orchestrator | 2026-03-28 02:56:18.281594 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 02:56:18.281612 | orchestrator | Saturday 28 March 2026 02:56:12 +0000 (0:00:00.228) 0:00:12.098 ******** 2026-03-28 02:56:18.281631 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}) 2026-03-28 02:56:18.281650 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}) 2026-03-28 02:56:18.281667 | orchestrator | 2026-03-28 02:56:18.281684 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 02:56:18.281700 | orchestrator | Saturday 28 March 2026 02:56:14 +0000 (0:00:02.040) 0:00:14.139 ******** 2026-03-28 02:56:18.281716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.281734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.281751 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281768 | orchestrator | 2026-03-28 02:56:18.281783 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 02:56:18.281799 | orchestrator | Saturday 28 March 2026 
02:56:14 +0000 (0:00:00.403) 0:00:14.542 ******** 2026-03-28 02:56:18.281816 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}) 2026-03-28 02:56:18.281832 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}) 2026-03-28 02:56:18.281850 | orchestrator | 2026-03-28 02:56:18.281868 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 02:56:18.281886 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:01.627) 0:00:16.169 ******** 2026-03-28 02:56:18.281904 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.281923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.281941 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.281960 | orchestrator | 2026-03-28 02:56:18.281977 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 02:56:18.281996 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:00.170) 0:00:16.340 ******** 2026-03-28 02:56:18.282126 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282145 | orchestrator | 2026-03-28 02:56:18.282157 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 02:56:18.282168 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:00.151) 0:00:16.492 ******** 2026-03-28 02:56:18.282179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 
'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282191 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282202 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282213 | orchestrator | 2026-03-28 02:56:18.282224 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 02:56:18.282235 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:00.159) 0:00:16.652 ******** 2026-03-28 02:56:18.282259 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282270 | orchestrator | 2026-03-28 02:56:18.282281 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 02:56:18.282292 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:00.163) 0:00:16.815 ******** 2026-03-28 02:56:18.282311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282334 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282345 | orchestrator | 2026-03-28 02:56:18.282357 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-28 02:56:18.282368 | orchestrator | Saturday 28 March 2026 02:56:16 +0000 (0:00:00.170) 0:00:16.986 ******** 2026-03-28 02:56:18.282378 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282389 | orchestrator | 2026-03-28 02:56:18.282400 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 02:56:18.282411 | orchestrator | 
Saturday 28 March 2026 02:56:17 +0000 (0:00:00.158) 0:00:17.144 ******** 2026-03-28 02:56:18.282422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282444 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282454 | orchestrator | 2026-03-28 02:56:18.282465 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-28 02:56:18.282476 | orchestrator | Saturday 28 March 2026 02:56:17 +0000 (0:00:00.190) 0:00:17.335 ******** 2026-03-28 02:56:18.282487 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:18.282498 | orchestrator | 2026-03-28 02:56:18.282509 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 02:56:18.282520 | orchestrator | Saturday 28 March 2026 02:56:17 +0000 (0:00:00.148) 0:00:17.484 ******** 2026-03-28 02:56:18.282531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282554 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282564 | orchestrator | 2026-03-28 02:56:18.282575 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 02:56:18.282586 | orchestrator | Saturday 28 March 2026 02:56:17 +0000 (0:00:00.163) 0:00:17.647 ******** 2026-03-28 02:56:18.282597 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282619 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282630 | orchestrator | 2026-03-28 02:56:18.282641 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 02:56:18.282652 | orchestrator | Saturday 28 March 2026 02:56:17 +0000 (0:00:00.376) 0:00:18.023 ******** 2026-03-28 02:56:18.282662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:18.282673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:18.282691 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282702 | orchestrator | 2026-03-28 02:56:18.282713 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 02:56:18.282724 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.175) 0:00:18.199 ******** 2026-03-28 02:56:18.282735 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:18.282746 | orchestrator | 2026-03-28 02:56:18.282765 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 02:56:18.282796 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.156) 0:00:18.356 ******** 2026-03-28 02:56:25.518886 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.518990 | orchestrator | 2026-03-28 02:56:25.519007 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-28 02:56:25.519060 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.156) 0:00:18.513 ******** 2026-03-28 02:56:25.519070 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519082 | orchestrator | 2026-03-28 02:56:25.519092 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 02:56:25.519102 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.150) 0:00:18.663 ******** 2026-03-28 02:56:25.519112 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:56:25.519122 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 02:56:25.519133 | orchestrator | } 2026-03-28 02:56:25.519143 | orchestrator | 2026-03-28 02:56:25.519153 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 02:56:25.519163 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.155) 0:00:18.819 ******** 2026-03-28 02:56:25.519173 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:56:25.519183 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 02:56:25.519193 | orchestrator | } 2026-03-28 02:56:25.519202 | orchestrator | 2026-03-28 02:56:25.519212 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 02:56:25.519237 | orchestrator | Saturday 28 March 2026 02:56:18 +0000 (0:00:00.157) 0:00:18.976 ******** 2026-03-28 02:56:25.519248 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:56:25.519258 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 02:56:25.519268 | orchestrator | } 2026-03-28 02:56:25.519278 | orchestrator | 2026-03-28 02:56:25.519287 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 02:56:25.519297 | orchestrator | Saturday 28 March 2026 02:56:19 +0000 (0:00:00.189) 0:00:19.165 ******** 2026-03-28 02:56:25.519307 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 02:56:25.519317 | orchestrator | 2026-03-28 02:56:25.519327 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 02:56:25.519337 | orchestrator | Saturday 28 March 2026 02:56:19 +0000 (0:00:00.704) 0:00:19.870 ******** 2026-03-28 02:56:25.519347 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:25.519356 | orchestrator | 2026-03-28 02:56:25.519366 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 02:56:25.519376 | orchestrator | Saturday 28 March 2026 02:56:20 +0000 (0:00:00.539) 0:00:20.410 ******** 2026-03-28 02:56:25.519386 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:25.519396 | orchestrator | 2026-03-28 02:56:25.519407 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 02:56:25.519418 | orchestrator | Saturday 28 March 2026 02:56:20 +0000 (0:00:00.567) 0:00:20.978 ******** 2026-03-28 02:56:25.519429 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:25.519440 | orchestrator | 2026-03-28 02:56:25.519451 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 02:56:25.519462 | orchestrator | Saturday 28 March 2026 02:56:21 +0000 (0:00:00.390) 0:00:21.368 ******** 2026-03-28 02:56:25.519473 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519484 | orchestrator | 2026-03-28 02:56:25.519496 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 02:56:25.519529 | orchestrator | Saturday 28 March 2026 02:56:21 +0000 (0:00:00.156) 0:00:21.525 ******** 2026-03-28 02:56:25.519542 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519552 | orchestrator | 2026-03-28 02:56:25.519563 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 02:56:25.519604 | orchestrator | 
Saturday 28 March 2026 02:56:21 +0000 (0:00:00.123) 0:00:21.648 ******** 2026-03-28 02:56:25.519615 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:56:25.519626 | orchestrator |  "vgs_report": { 2026-03-28 02:56:25.519638 | orchestrator |  "vg": [] 2026-03-28 02:56:25.519651 | orchestrator |  } 2026-03-28 02:56:25.519662 | orchestrator | } 2026-03-28 02:56:25.519673 | orchestrator | 2026-03-28 02:56:25.519683 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 02:56:25.519693 | orchestrator | Saturday 28 March 2026 02:56:21 +0000 (0:00:00.154) 0:00:21.802 ******** 2026-03-28 02:56:25.519703 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519712 | orchestrator | 2026-03-28 02:56:25.519722 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 02:56:25.519732 | orchestrator | Saturday 28 March 2026 02:56:21 +0000 (0:00:00.150) 0:00:21.952 ******** 2026-03-28 02:56:25.519741 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519751 | orchestrator | 2026-03-28 02:56:25.519761 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 02:56:25.519771 | orchestrator | Saturday 28 March 2026 02:56:22 +0000 (0:00:00.154) 0:00:22.107 ******** 2026-03-28 02:56:25.519780 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519790 | orchestrator | 2026-03-28 02:56:25.519800 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 02:56:25.519809 | orchestrator | Saturday 28 March 2026 02:56:22 +0000 (0:00:00.151) 0:00:22.259 ******** 2026-03-28 02:56:25.519819 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519828 | orchestrator | 2026-03-28 02:56:25.519838 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 02:56:25.519848 | orchestrator | 
Saturday 28 March 2026 02:56:22 +0000 (0:00:00.160) 0:00:22.420 ******** 2026-03-28 02:56:25.519857 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519867 | orchestrator | 2026-03-28 02:56:25.519876 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 02:56:25.519886 | orchestrator | Saturday 28 March 2026 02:56:22 +0000 (0:00:00.171) 0:00:22.591 ******** 2026-03-28 02:56:25.519896 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519905 | orchestrator | 2026-03-28 02:56:25.519915 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 02:56:25.519924 | orchestrator | Saturday 28 March 2026 02:56:22 +0000 (0:00:00.146) 0:00:22.738 ******** 2026-03-28 02:56:25.519934 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519944 | orchestrator | 2026-03-28 02:56:25.519953 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 02:56:25.519963 | orchestrator | Saturday 28 March 2026 02:56:22 +0000 (0:00:00.167) 0:00:22.906 ******** 2026-03-28 02:56:25.519988 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.519999 | orchestrator | 2026-03-28 02:56:25.520009 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 02:56:25.520036 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.367) 0:00:23.274 ******** 2026-03-28 02:56:25.520046 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520056 | orchestrator | 2026-03-28 02:56:25.520066 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-28 02:56:25.520075 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.157) 0:00:23.431 ******** 2026-03-28 02:56:25.520085 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520094 | orchestrator | 2026-03-28 02:56:25.520104 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 02:56:25.520114 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.150) 0:00:23.581 ******** 2026-03-28 02:56:25.520131 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520141 | orchestrator | 2026-03-28 02:56:25.520151 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 02:56:25.520161 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.180) 0:00:23.761 ******** 2026-03-28 02:56:25.520170 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520180 | orchestrator | 2026-03-28 02:56:25.520195 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 02:56:25.520205 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.143) 0:00:23.905 ******** 2026-03-28 02:56:25.520215 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520225 | orchestrator | 2026-03-28 02:56:25.520234 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 02:56:25.520244 | orchestrator | Saturday 28 March 2026 02:56:23 +0000 (0:00:00.182) 0:00:24.088 ******** 2026-03-28 02:56:25.520253 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520263 | orchestrator | 2026-03-28 02:56:25.520273 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 02:56:25.520282 | orchestrator | Saturday 28 March 2026 02:56:24 +0000 (0:00:00.178) 0:00:24.267 ******** 2026-03-28 02:56:25.520293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:25.520306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 
'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:25.520315 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520325 | orchestrator | 2026-03-28 02:56:25.520335 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 02:56:25.520345 | orchestrator | Saturday 28 March 2026 02:56:24 +0000 (0:00:00.178) 0:00:24.445 ******** 2026-03-28 02:56:25.520355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:25.520364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:25.520374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520384 | orchestrator | 2026-03-28 02:56:25.520394 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 02:56:25.520404 | orchestrator | Saturday 28 March 2026 02:56:24 +0000 (0:00:00.220) 0:00:24.665 ******** 2026-03-28 02:56:25.520414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:25.520423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:25.520433 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520443 | orchestrator | 2026-03-28 02:56:25.520453 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-28 02:56:25.520462 | orchestrator | Saturday 28 March 2026 02:56:24 +0000 (0:00:00.160) 0:00:24.826 ******** 2026-03-28 02:56:25.520472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:25.520482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:25.520492 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520501 | orchestrator | 2026-03-28 02:56:25.520511 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 02:56:25.520521 | orchestrator | Saturday 28 March 2026 02:56:24 +0000 (0:00:00.162) 0:00:24.988 ******** 2026-03-28 02:56:25.520537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:25.520547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:25.520557 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:25.520566 | orchestrator | 2026-03-28 02:56:25.520576 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 02:56:25.520586 | orchestrator | Saturday 28 March 2026 02:56:25 +0000 (0:00:00.449) 0:00:25.438 ******** 2026-03-28 02:56:25.520602 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391420 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391428 | orchestrator | 2026-03-28 02:56:31.391434 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-28 02:56:31.391440 | orchestrator | Saturday 28 March 2026 02:56:25 +0000 (0:00:00.160) 0:00:25.599 ******** 2026-03-28 02:56:31.391445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391455 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391485 | orchestrator | 2026-03-28 02:56:31.391502 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 02:56:31.391507 | orchestrator | Saturday 28 March 2026 02:56:25 +0000 (0:00:00.172) 0:00:25.771 ******** 2026-03-28 02:56:31.391512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391521 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391525 | orchestrator | 2026-03-28 02:56:31.391529 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 02:56:31.391533 | orchestrator | Saturday 28 March 2026 02:56:25 +0000 (0:00:00.184) 0:00:25.955 ******** 2026-03-28 02:56:31.391537 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:31.391542 | orchestrator | 2026-03-28 02:56:31.391546 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 02:56:31.391550 | orchestrator | Saturday 28 March 2026 02:56:26 +0000 
(0:00:00.575) 0:00:26.531 ******** 2026-03-28 02:56:31.391554 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:31.391558 | orchestrator | 2026-03-28 02:56:31.391562 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 02:56:31.391566 | orchestrator | Saturday 28 March 2026 02:56:26 +0000 (0:00:00.546) 0:00:27.078 ******** 2026-03-28 02:56:31.391569 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:56:31.391573 | orchestrator | 2026-03-28 02:56:31.391577 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 02:56:31.391582 | orchestrator | Saturday 28 March 2026 02:56:27 +0000 (0:00:00.182) 0:00:27.260 ******** 2026-03-28 02:56:31.391586 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'vg_name': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}) 2026-03-28 02:56:31.391591 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'vg_name': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}) 2026-03-28 02:56:31.391607 | orchestrator | 2026-03-28 02:56:31.391611 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 02:56:31.391615 | orchestrator | Saturday 28 March 2026 02:56:27 +0000 (0:00:00.198) 0:00:27.459 ******** 2026-03-28 02:56:31.391619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391627 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391631 | orchestrator | 2026-03-28 02:56:31.391635 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-28 02:56:31.391639 | orchestrator | Saturday 28 March 2026 02:56:27 +0000 (0:00:00.183) 0:00:27.643 ******** 2026-03-28 02:56:31.391643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391650 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391654 | orchestrator | 2026-03-28 02:56:31.391658 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 02:56:31.391662 | orchestrator | Saturday 28 March 2026 02:56:27 +0000 (0:00:00.166) 0:00:27.810 ******** 2026-03-28 02:56:31.391666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 02:56:31.391670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 02:56:31.391674 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:56:31.391677 | orchestrator | 2026-03-28 02:56:31.391681 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 02:56:31.391685 | orchestrator | Saturday 28 March 2026 02:56:27 +0000 (0:00:00.187) 0:00:27.998 ******** 2026-03-28 02:56:31.391698 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 02:56:31.391703 | orchestrator |  "lvm_report": { 2026-03-28 02:56:31.391707 | orchestrator |  "lv": [ 2026-03-28 02:56:31.391711 | orchestrator |  { 2026-03-28 02:56:31.391715 | orchestrator |  "lv_name": 
"osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb", 2026-03-28 02:56:31.391720 | orchestrator |  "vg_name": "ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb" 2026-03-28 02:56:31.391724 | orchestrator |  }, 2026-03-28 02:56:31.391727 | orchestrator |  { 2026-03-28 02:56:31.391731 | orchestrator |  "lv_name": "osd-block-e94d822c-120c-5920-885f-96546946f9a0", 2026-03-28 02:56:31.391735 | orchestrator |  "vg_name": "ceph-e94d822c-120c-5920-885f-96546946f9a0" 2026-03-28 02:56:31.391739 | orchestrator |  } 2026-03-28 02:56:31.391743 | orchestrator |  ], 2026-03-28 02:56:31.391748 | orchestrator |  "pv": [ 2026-03-28 02:56:31.391754 | orchestrator |  { 2026-03-28 02:56:31.391759 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 02:56:31.391766 | orchestrator |  "vg_name": "ceph-e94d822c-120c-5920-885f-96546946f9a0" 2026-03-28 02:56:31.391772 | orchestrator |  }, 2026-03-28 02:56:31.391778 | orchestrator |  { 2026-03-28 02:56:31.391788 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 02:56:31.391794 | orchestrator |  "vg_name": "ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb" 2026-03-28 02:56:31.391801 | orchestrator |  } 2026-03-28 02:56:31.391807 | orchestrator |  ] 2026-03-28 02:56:31.391813 | orchestrator |  } 2026-03-28 02:56:31.391819 | orchestrator | } 2026-03-28 02:56:31.391831 | orchestrator | 2026-03-28 02:56:31.391838 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 02:56:31.391844 | orchestrator | 2026-03-28 02:56:31.391848 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 02:56:31.391852 | orchestrator | Saturday 28 March 2026 02:56:28 +0000 (0:00:00.605) 0:00:28.604 ******** 2026-03-28 02:56:31.391856 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-28 02:56:31.391860 | orchestrator | 2026-03-28 02:56:31.391864 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 
02:56:31.391868 | orchestrator | Saturday 28 March 2026 02:56:28 +0000 (0:00:00.287) 0:00:28.891 ******** 2026-03-28 02:56:31.391871 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:56:31.391875 | orchestrator | 2026-03-28 02:56:31.391879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.391883 | orchestrator | Saturday 28 March 2026 02:56:29 +0000 (0:00:00.239) 0:00:29.131 ******** 2026-03-28 02:56:31.391886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 02:56:31.391890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 02:56:31.391894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 02:56:31.391898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 02:56:31.391902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 02:56:31.391906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 02:56:31.391910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 02:56:31.391915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 02:56:31.391919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 02:56:31.391923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 02:56:31.391928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 02:56:31.391932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 02:56:31.391936 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 02:56:31.391940 | orchestrator | 2026-03-28 02:56:31.391944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.391948 | orchestrator | Saturday 28 March 2026 02:56:29 +0000 (0:00:00.505) 0:00:29.637 ******** 2026-03-28 02:56:31.391953 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.391957 | orchestrator | 2026-03-28 02:56:31.391961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.391966 | orchestrator | Saturday 28 March 2026 02:56:29 +0000 (0:00:00.222) 0:00:29.860 ******** 2026-03-28 02:56:31.391970 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.391974 | orchestrator | 2026-03-28 02:56:31.391978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.391983 | orchestrator | Saturday 28 March 2026 02:56:30 +0000 (0:00:00.254) 0:00:30.114 ******** 2026-03-28 02:56:31.391987 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.391992 | orchestrator | 2026-03-28 02:56:31.391996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.392000 | orchestrator | Saturday 28 March 2026 02:56:30 +0000 (0:00:00.222) 0:00:30.337 ******** 2026-03-28 02:56:31.392005 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.392009 | orchestrator | 2026-03-28 02:56:31.392031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:31.392038 | orchestrator | Saturday 28 March 2026 02:56:30 +0000 (0:00:00.220) 0:00:30.558 ******** 2026-03-28 02:56:31.392046 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.392051 | orchestrator | 2026-03-28 02:56:31.392056 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-28 02:56:31.392060 | orchestrator | Saturday 28 March 2026 02:56:30 +0000 (0:00:00.226) 0:00:30.785 ******** 2026-03-28 02:56:31.392064 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:31.392069 | orchestrator | 2026-03-28 02:56:31.392077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:42.491755 | orchestrator | Saturday 28 March 2026 02:56:31 +0000 (0:00:00.683) 0:00:31.468 ******** 2026-03-28 02:56:42.491889 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:42.491906 | orchestrator | 2026-03-28 02:56:42.491919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:42.491966 | orchestrator | Saturday 28 March 2026 02:56:31 +0000 (0:00:00.235) 0:00:31.703 ******** 2026-03-28 02:56:42.491978 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:56:42.491989 | orchestrator | 2026-03-28 02:56:42.491999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:42.492009 | orchestrator | Saturday 28 March 2026 02:56:31 +0000 (0:00:00.222) 0:00:31.926 ******** 2026-03-28 02:56:42.492061 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785) 2026-03-28 02:56:42.492074 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785) 2026-03-28 02:56:42.492088 | orchestrator | 2026-03-28 02:56:42.492122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:56:42.492138 | orchestrator | Saturday 28 March 2026 02:56:32 +0000 (0:00:00.490) 0:00:32.417 ******** 2026-03-28 02:56:42.492153 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4) 2026-03-28 02:56:42.492170 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4)
2026-03-28 02:56:42.492187 | orchestrator |
2026-03-28 02:56:42.492202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:56:42.492212 | orchestrator | Saturday 28 March 2026 02:56:32 +0000 (0:00:00.441) 0:00:32.858 ********
2026-03-28 02:56:42.492222 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab)
2026-03-28 02:56:42.492232 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab)
2026-03-28 02:56:42.492241 | orchestrator |
2026-03-28 02:56:42.492251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:56:42.492260 | orchestrator | Saturday 28 March 2026 02:56:33 +0000 (0:00:00.487) 0:00:33.345 ********
2026-03-28 02:56:42.492270 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7)
2026-03-28 02:56:42.492280 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7)
2026-03-28 02:56:42.492289 | orchestrator |
2026-03-28 02:56:42.492299 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 02:56:42.492309 | orchestrator | Saturday 28 March 2026 02:56:33 +0000 (0:00:00.512) 0:00:33.858 ********
2026-03-28 02:56:42.492319 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 02:56:42.492328 | orchestrator |
2026-03-28 02:56:42.492338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492348 | orchestrator | Saturday 28 March 2026 02:56:34 +0000 (0:00:00.370) 0:00:34.228 ********
2026-03-28 02:56:42.492357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-28 02:56:42.492367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-28 02:56:42.492377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-28 02:56:42.492409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-28 02:56:42.492419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-28 02:56:42.492429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-28 02:56:42.492438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-28 02:56:42.492448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-28 02:56:42.492457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-28 02:56:42.492466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-28 02:56:42.492476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-28 02:56:42.492485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-28 02:56:42.492495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-28 02:56:42.492504 | orchestrator |
2026-03-28 02:56:42.492514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492523 | orchestrator | Saturday 28 March 2026 02:56:34 +0000 (0:00:00.467) 0:00:34.695 ********
2026-03-28 02:56:42.492533 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492542 | orchestrator |
2026-03-28 02:56:42.492551 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492561 | orchestrator | Saturday 28 March 2026 02:56:34 +0000 (0:00:00.248) 0:00:34.944 ********
2026-03-28 02:56:42.492570 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492580 | orchestrator |
2026-03-28 02:56:42.492589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492599 | orchestrator | Saturday 28 March 2026 02:56:35 +0000 (0:00:00.254) 0:00:35.199 ********
2026-03-28 02:56:42.492609 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492618 | orchestrator |
2026-03-28 02:56:42.492646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492656 | orchestrator | Saturday 28 March 2026 02:56:35 +0000 (0:00:00.657) 0:00:35.856 ********
2026-03-28 02:56:42.492666 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492675 | orchestrator |
2026-03-28 02:56:42.492685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492695 | orchestrator | Saturday 28 March 2026 02:56:36 +0000 (0:00:00.232) 0:00:36.089 ********
2026-03-28 02:56:42.492704 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492714 | orchestrator |
2026-03-28 02:56:42.492723 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492734 | orchestrator | Saturday 28 March 2026 02:56:36 +0000 (0:00:00.226) 0:00:36.315 ********
2026-03-28 02:56:42.492743 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492754 | orchestrator |
2026-03-28 02:56:42.492764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492773 | orchestrator | Saturday 28 March 2026 02:56:36 +0000 (0:00:00.219) 0:00:36.534 ********
2026-03-28 02:56:42.492789 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492799 | orchestrator |
2026-03-28 02:56:42.492808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492818 | orchestrator | Saturday 28 March 2026 02:56:36 +0000 (0:00:00.214) 0:00:36.749 ********
2026-03-28 02:56:42.492828 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492837 | orchestrator |
2026-03-28 02:56:42.492847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492857 | orchestrator | Saturday 28 March 2026 02:56:36 +0000 (0:00:00.222) 0:00:36.972 ********
2026-03-28 02:56:42.492866 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-28 02:56:42.492885 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-28 02:56:42.492895 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-28 02:56:42.492904 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-28 02:56:42.492914 | orchestrator |
2026-03-28 02:56:42.492924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492933 | orchestrator | Saturday 28 March 2026 02:56:37 +0000 (0:00:00.684) 0:00:37.657 ********
2026-03-28 02:56:42.492943 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492952 | orchestrator |
2026-03-28 02:56:42.492962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.492971 | orchestrator | Saturday 28 March 2026 02:56:37 +0000 (0:00:00.207) 0:00:37.865 ********
2026-03-28 02:56:42.492981 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.492990 | orchestrator |
2026-03-28 02:56:42.493000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.493010 | orchestrator | Saturday 28 March 2026 02:56:38 +0000 (0:00:00.225) 0:00:38.090 ********
2026-03-28 02:56:42.493064 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.493075 | orchestrator |
2026-03-28 02:56:42.493085 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 02:56:42.493095 | orchestrator | Saturday 28 March 2026 02:56:38 +0000 (0:00:00.222) 0:00:38.313 ********
2026-03-28 02:56:42.493105 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.493115 | orchestrator |
2026-03-28 02:56:42.493124 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-28 02:56:42.493134 | orchestrator | Saturday 28 March 2026 02:56:38 +0000 (0:00:00.208) 0:00:38.521 ********
2026-03-28 02:56:42.493143 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.493153 | orchestrator |
2026-03-28 02:56:42.493163 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-28 02:56:42.493172 | orchestrator | Saturday 28 March 2026 02:56:38 +0000 (0:00:00.374) 0:00:38.896 ********
2026-03-28 02:56:42.493182 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '80a8d2d8-5d5c-5988-8f38-8985bde94181'}})
2026-03-28 02:56:42.493192 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}})
2026-03-28 02:56:42.493201 | orchestrator |
2026-03-28 02:56:42.493211 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-28 02:56:42.493221 | orchestrator | Saturday 28 March 2026 02:56:39 +0000 (0:00:00.233) 0:00:39.130 ********
2026-03-28 02:56:42.493232 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:42.493243 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:42.493253 | orchestrator |
2026-03-28 02:56:42.493262 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-28 02:56:42.493272 | orchestrator | Saturday 28 March 2026 02:56:40 +0000 (0:00:01.932) 0:00:41.063 ********
2026-03-28 02:56:42.493282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:42.493293 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:42.493346 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:42.493359 | orchestrator |
2026-03-28 02:56:42.493368 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-28 02:56:42.493378 | orchestrator | Saturday 28 March 2026 02:56:41 +0000 (0:00:00.176) 0:00:41.239 ********
2026-03-28 02:56:42.493388 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:42.493414 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.582517 | orchestrator |
2026-03-28 02:56:48.582664 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-28 02:56:48.582683 | orchestrator | Saturday 28 March 2026 02:56:42 +0000 (0:00:01.325) 0:00:42.564 ********
2026-03-28 02:56:48.582695 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.582710 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.582722 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.582734 | orchestrator |
2026-03-28 02:56:48.582768 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-28 02:56:48.582780 | orchestrator | Saturday 28 March 2026 02:56:42 +0000 (0:00:00.150) 0:00:42.734 ********
2026-03-28 02:56:48.582791 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.582802 | orchestrator |
2026-03-28 02:56:48.582813 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-28 02:56:48.582824 | orchestrator | Saturday 28 March 2026 02:56:42 +0000 (0:00:00.150) 0:00:42.884 ********
2026-03-28 02:56:48.582836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.582847 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.582859 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.582870 | orchestrator |
2026-03-28 02:56:48.582881 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-28 02:56:48.582892 | orchestrator | Saturday 28 March 2026 02:56:42 +0000 (0:00:00.169) 0:00:43.053 ********
2026-03-28 02:56:48.582903 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.582914 | orchestrator |
2026-03-28 02:56:48.582925 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-28 02:56:48.582936 | orchestrator | Saturday 28 March 2026 02:56:43 +0000 (0:00:00.151) 0:00:43.205 ********
2026-03-28 02:56:48.582947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.582959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.582970 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.582982 | orchestrator |
2026-03-28 02:56:48.582993 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-28 02:56:48.583004 | orchestrator | Saturday 28 March 2026 02:56:43 +0000 (0:00:00.179) 0:00:43.385 ********
2026-03-28 02:56:48.583016 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583053 | orchestrator |
2026-03-28 02:56:48.583065 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-28 02:56:48.583078 | orchestrator | Saturday 28 March 2026 02:56:43 +0000 (0:00:00.157) 0:00:43.542 ********
2026-03-28 02:56:48.583090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.583103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.583116 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583128 | orchestrator |
2026-03-28 02:56:48.583141 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-28 02:56:48.583179 | orchestrator | Saturday 28 March 2026 02:56:43 +0000 (0:00:00.157) 0:00:43.700 ********
2026-03-28 02:56:48.583193 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:48.583207 | orchestrator |
2026-03-28 02:56:48.583219 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-28 02:56:48.583232 | orchestrator | Saturday 28 March 2026 02:56:43 +0000 (0:00:00.154) 0:00:43.854 ********
2026-03-28 02:56:48.583245 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.583258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.583270 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583283 | orchestrator |
2026-03-28 02:56:48.583295 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-28 02:56:48.583308 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.402) 0:00:44.256 ********
2026-03-28 02:56:48.583320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.583333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.583346 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583358 | orchestrator |
2026-03-28 02:56:48.583371 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-28 02:56:48.583404 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.177) 0:00:44.434 ********
2026-03-28 02:56:48.583416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:48.583427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:48.583439 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583449 | orchestrator |
2026-03-28 02:56:48.583460 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-28 02:56:48.583472 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.173) 0:00:44.607 ********
2026-03-28 02:56:48.583489 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583500 | orchestrator |
2026-03-28 02:56:48.583511 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-28 02:56:48.583522 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.157) 0:00:44.764 ********
2026-03-28 02:56:48.583533 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583544 | orchestrator |
2026-03-28 02:56:48.583555 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-28 02:56:48.583566 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.154) 0:00:44.919 ********
2026-03-28 02:56:48.583577 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.583587 | orchestrator |
2026-03-28 02:56:48.583598 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-28 02:56:48.583609 | orchestrator | Saturday 28 March 2026 02:56:44 +0000 (0:00:00.142) 0:00:45.061 ********
2026-03-28 02:56:48.583621 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 02:56:48.583632 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-28 02:56:48.583643 | orchestrator | }
2026-03-28 02:56:48.583654 | orchestrator |
2026-03-28 02:56:48.583665 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-28 02:56:48.583677 | orchestrator | Saturday 28 March 2026 02:56:45 +0000 (0:00:00.177) 0:00:45.239 ********
2026-03-28 02:56:48.583688 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 02:56:48.583698 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-28 02:56:48.583719 | orchestrator | }
2026-03-28 02:56:48.583730 | orchestrator |
2026-03-28 02:56:48.583741 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-28 02:56:48.583751 | orchestrator | Saturday 28 March 2026 02:56:45 +0000 (0:00:00.165) 0:00:45.404 ********
2026-03-28 02:56:48.583762 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 02:56:48.583773 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-28 02:56:48.583784 | orchestrator | }
2026-03-28 02:56:48.583795 | orchestrator |
2026-03-28 02:56:48.583806 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-28 02:56:48.583817 | orchestrator | Saturday 28 March 2026 02:56:45 +0000 (0:00:00.164) 0:00:45.569 ********
2026-03-28 02:56:48.583827 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:48.583838 | orchestrator |
2026-03-28 02:56:48.583849 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-28 02:56:48.583860 | orchestrator | Saturday 28 March 2026 02:56:46 +0000 (0:00:00.527) 0:00:46.096 ********
2026-03-28 02:56:48.583871 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:48.583882 | orchestrator |
2026-03-28 02:56:48.583892 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-28 02:56:48.583903 | orchestrator | Saturday 28 March 2026 02:56:46 +0000 (0:00:00.559) 0:00:46.656 ********
2026-03-28 02:56:48.583914 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:48.583925 | orchestrator |
2026-03-28 02:56:48.583936 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-28 02:56:48.583947 | orchestrator | Saturday 28 March 2026 02:56:47 +0000 (0:00:00.597) 0:00:47.253 ********
2026-03-28 02:56:48.583957 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:48.583968 | orchestrator |
2026-03-28 02:56:48.583979 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-28 02:56:48.583990 | orchestrator | Saturday 28 March 2026 02:56:47 +0000 (0:00:00.397) 0:00:47.650 ********
2026-03-28 02:56:48.584001 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584011 | orchestrator |
2026-03-28 02:56:48.584039 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-28 02:56:48.584051 | orchestrator | Saturday 28 March 2026 02:56:47 +0000 (0:00:00.130) 0:00:47.781 ********
2026-03-28 02:56:48.584062 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584073 | orchestrator |
2026-03-28 02:56:48.584084 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-28 02:56:48.584096 | orchestrator | Saturday 28 March 2026 02:56:47 +0000 (0:00:00.125) 0:00:47.907 ********
2026-03-28 02:56:48.584106 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 02:56:48.584117 | orchestrator |  "vgs_report": {
2026-03-28 02:56:48.584129 | orchestrator |  "vg": []
2026-03-28 02:56:48.584141 | orchestrator |  }
2026-03-28 02:56:48.584152 | orchestrator | }
2026-03-28 02:56:48.584163 | orchestrator |
2026-03-28 02:56:48.584174 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-28 02:56:48.584185 | orchestrator | Saturday 28 March 2026 02:56:47 +0000 (0:00:00.152) 0:00:48.059 ********
2026-03-28 02:56:48.584195 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584206 | orchestrator |
2026-03-28 02:56:48.584217 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-28 02:56:48.584228 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.149) 0:00:48.209 ********
2026-03-28 02:56:48.584238 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584249 | orchestrator |
2026-03-28 02:56:48.584260 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-28 02:56:48.584270 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.150) 0:00:48.359 ********
2026-03-28 02:56:48.584281 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584291 | orchestrator |
2026-03-28 02:56:48.584302 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-28 02:56:48.584313 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.150) 0:00:48.509 ********
2026-03-28 02:56:48.584332 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:48.584343 | orchestrator |
2026-03-28 02:56:48.584360 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-28 02:56:53.736734 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.151) 0:00:48.661 ********
2026-03-28 02:56:53.736849 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.736864 | orchestrator |
2026-03-28 02:56:53.736877 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-28 02:56:53.736890 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.169) 0:00:48.830 ********
2026-03-28 02:56:53.736901 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.736911 | orchestrator |
2026-03-28 02:56:53.736922 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-28 02:56:53.736934 | orchestrator | Saturday 28 March 2026 02:56:48 +0000 (0:00:00.141) 0:00:48.971 ********
2026-03-28 02:56:53.736946 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.736953 | orchestrator |
2026-03-28 02:56:53.736981 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-28 02:56:53.736989 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.134) 0:00:49.105 ********
2026-03-28 02:56:53.736996 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737002 | orchestrator |
2026-03-28 02:56:53.737010 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-28 02:56:53.737016 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.149) 0:00:49.255 ********
2026-03-28 02:56:53.737110 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737119 | orchestrator |
2026-03-28 02:56:53.737126 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-28 02:56:53.737133 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.365) 0:00:49.621 ********
2026-03-28 02:56:53.737140 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737147 | orchestrator |
2026-03-28 02:56:53.737154 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-28 02:56:53.737161 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.153) 0:00:49.775 ********
2026-03-28 02:56:53.737168 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737175 | orchestrator |
2026-03-28 02:56:53.737182 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-28 02:56:53.737188 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.147) 0:00:49.918 ********
2026-03-28 02:56:53.737195 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737202 | orchestrator |
2026-03-28 02:56:53.737208 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-28 02:56:53.737215 | orchestrator | Saturday 28 March 2026 02:56:49 +0000 (0:00:00.147) 0:00:50.065 ********
2026-03-28 02:56:53.737222 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737228 | orchestrator |
2026-03-28 02:56:53.737235 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-28 02:56:53.737242 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.146) 0:00:50.212 ********
2026-03-28 02:56:53.737249 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737256 | orchestrator |
2026-03-28 02:56:53.737264 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-28 02:56:53.737272 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.148) 0:00:50.360 ********
2026-03-28 02:56:53.737281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737299 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737307 | orchestrator |
2026-03-28 02:56:53.737315 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-28 02:56:53.737349 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.202) 0:00:50.563 ********
2026-03-28 02:56:53.737357 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737373 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737381 | orchestrator |
2026-03-28 02:56:53.737388 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-28 02:56:53.737394 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.166) 0:00:50.729 ********
2026-03-28 02:56:53.737401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737408 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737415 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737423 | orchestrator |
2026-03-28 02:56:53.737430 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-28 02:56:53.737436 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.164) 0:00:50.894 ********
2026-03-28 02:56:53.737443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737457 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737464 | orchestrator |
2026-03-28 02:56:53.737488 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-28 02:56:53.737495 | orchestrator | Saturday 28 March 2026 02:56:50 +0000 (0:00:00.184) 0:00:51.078 ********
2026-03-28 02:56:53.737502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737516 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737522 | orchestrator |
2026-03-28 02:56:53.737534 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-28 02:56:53.737542 | orchestrator | Saturday 28 March 2026 02:56:51 +0000 (0:00:00.156) 0:00:51.235 ********
2026-03-28 02:56:53.737549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737562 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737569 | orchestrator |
2026-03-28 02:56:53.737576 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-28 02:56:53.737582 | orchestrator | Saturday 28 March 2026 02:56:51 +0000 (0:00:00.165) 0:00:51.400 ********
2026-03-28 02:56:53.737589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737603 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737620 | orchestrator |
2026-03-28 02:56:53.737626 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-28 02:56:53.737637 | orchestrator | Saturday 28 March 2026 02:56:51 +0000 (0:00:00.394) 0:00:51.795 ********
2026-03-28 02:56:53.737647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737667 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737678 | orchestrator |
2026-03-28 02:56:53.737688 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-28 02:56:53.737699 | orchestrator | Saturday 28 March 2026 02:56:51 +0000 (0:00:00.160) 0:00:51.956 ********
2026-03-28 02:56:53.737706 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:53.737712 | orchestrator |
2026-03-28 02:56:53.737718 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-28 02:56:53.737724 | orchestrator | Saturday 28 March 2026 02:56:52 +0000 (0:00:00.549) 0:00:52.505 ********
2026-03-28 02:56:53.737730 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:53.737736 | orchestrator |
2026-03-28 02:56:53.737743 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-28 02:56:53.737749 | orchestrator | Saturday 28 March 2026 02:56:52 +0000 (0:00:00.541) 0:00:53.047 ********
2026-03-28 02:56:53.737755 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:56:53.737761 | orchestrator |
2026-03-28 02:56:53.737767 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-28 02:56:53.737773 | orchestrator | Saturday 28 March 2026 02:56:53 +0000 (0:00:00.156) 0:00:53.203 ********
2026-03-28 02:56:53.737779 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'vg_name': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737788 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'vg_name': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737794 | orchestrator |
2026-03-28 02:56:53.737800 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-28 02:56:53.737806 | orchestrator | Saturday 28 March 2026 02:56:53 +0000 (0:00:00.234) 0:00:53.437 ********
2026-03-28 02:56:53.737812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:56:53.737825 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:56:53.737831 | orchestrator |
2026-03-28 02:56:53.737837 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-28 02:56:53.737843 | orchestrator | Saturday 28 March 2026 02:56:53 +0000 (0:00:00.196) 0:00:53.634 ********
2026-03-28 02:56:53.737849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 02:56:53.737860 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 02:57:00.703845 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:57:00.703989 | orchestrator |
2026-03-28 02:57:00.704016 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-28 02:57:00.704070 |
orchestrator | Saturday 28 March 2026 02:56:53 +0000 (0:00:00.180) 0:00:53.815 ******** 2026-03-28 02:57:00.704088 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})  2026-03-28 02:57:00.704152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})  2026-03-28 02:57:00.704170 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:57:00.704187 | orchestrator | 2026-03-28 02:57:00.704204 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 02:57:00.704220 | orchestrator | Saturday 28 March 2026 02:56:53 +0000 (0:00:00.162) 0:00:53.978 ******** 2026-03-28 02:57:00.704237 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 02:57:00.704254 | orchestrator |  "lvm_report": { 2026-03-28 02:57:00.704273 | orchestrator |  "lv": [ 2026-03-28 02:57:00.704290 | orchestrator |  { 2026-03-28 02:57:00.704307 | orchestrator |  "lv_name": "osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181", 2026-03-28 02:57:00.704324 | orchestrator |  "vg_name": "ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181" 2026-03-28 02:57:00.704341 | orchestrator |  }, 2026-03-28 02:57:00.704356 | orchestrator |  { 2026-03-28 02:57:00.704373 | orchestrator |  "lv_name": "osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41", 2026-03-28 02:57:00.704389 | orchestrator |  "vg_name": "ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41" 2026-03-28 02:57:00.704405 | orchestrator |  } 2026-03-28 02:57:00.704422 | orchestrator |  ], 2026-03-28 02:57:00.704438 | orchestrator |  "pv": [ 2026-03-28 02:57:00.704453 | orchestrator |  { 2026-03-28 02:57:00.704470 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 02:57:00.704487 | orchestrator |  "vg_name": "ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181" 2026-03-28 02:57:00.704505 | orchestrator |  }, 2026-03-28 
02:57:00.704522 | orchestrator |  { 2026-03-28 02:57:00.704538 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 02:57:00.704554 | orchestrator |  "vg_name": "ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41" 2026-03-28 02:57:00.704569 | orchestrator |  } 2026-03-28 02:57:00.704586 | orchestrator |  ] 2026-03-28 02:57:00.704602 | orchestrator |  } 2026-03-28 02:57:00.704619 | orchestrator | } 2026-03-28 02:57:00.704635 | orchestrator | 2026-03-28 02:57:00.704652 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 02:57:00.704667 | orchestrator | 2026-03-28 02:57:00.704683 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 02:57:00.704697 | orchestrator | Saturday 28 March 2026 02:56:54 +0000 (0:00:00.310) 0:00:54.288 ******** 2026-03-28 02:57:00.704713 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-28 02:57:00.704729 | orchestrator | 2026-03-28 02:57:00.704745 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 02:57:00.704761 | orchestrator | Saturday 28 March 2026 02:56:54 +0000 (0:00:00.788) 0:00:55.076 ******** 2026-03-28 02:57:00.704777 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:00.704793 | orchestrator | 2026-03-28 02:57:00.704808 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.704822 | orchestrator | Saturday 28 March 2026 02:56:55 +0000 (0:00:00.288) 0:00:55.364 ******** 2026-03-28 02:57:00.704838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-28 02:57:00.704854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-28 02:57:00.704869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-28 02:57:00.704885 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-28 02:57:00.704901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-28 02:57:00.704916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-28 02:57:00.704932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-28 02:57:00.704966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-28 02:57:00.704984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-28 02:57:00.705002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-28 02:57:00.705018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-28 02:57:00.705093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-28 02:57:00.705111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-28 02:57:00.705127 | orchestrator | 2026-03-28 02:57:00.705144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705161 | orchestrator | Saturday 28 March 2026 02:56:55 +0000 (0:00:00.431) 0:00:55.796 ******** 2026-03-28 02:57:00.705179 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705196 | orchestrator | 2026-03-28 02:57:00.705213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705231 | orchestrator | Saturday 28 March 2026 02:56:55 +0000 (0:00:00.216) 0:00:56.012 ******** 2026-03-28 02:57:00.705249 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705266 | orchestrator | 2026-03-28 
02:57:00.705283 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705327 | orchestrator | Saturday 28 March 2026 02:56:56 +0000 (0:00:00.227) 0:00:56.239 ******** 2026-03-28 02:57:00.705347 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705364 | orchestrator | 2026-03-28 02:57:00.705381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705397 | orchestrator | Saturday 28 March 2026 02:56:56 +0000 (0:00:00.215) 0:00:56.455 ******** 2026-03-28 02:57:00.705415 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705433 | orchestrator | 2026-03-28 02:57:00.705451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705469 | orchestrator | Saturday 28 March 2026 02:56:56 +0000 (0:00:00.208) 0:00:56.664 ******** 2026-03-28 02:57:00.705486 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705502 | orchestrator | 2026-03-28 02:57:00.705519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705535 | orchestrator | Saturday 28 March 2026 02:56:56 +0000 (0:00:00.213) 0:00:56.877 ******** 2026-03-28 02:57:00.705551 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705566 | orchestrator | 2026-03-28 02:57:00.705581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705596 | orchestrator | Saturday 28 March 2026 02:56:57 +0000 (0:00:00.219) 0:00:57.097 ******** 2026-03-28 02:57:00.705611 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705626 | orchestrator | 2026-03-28 02:57:00.705642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705659 | orchestrator | Saturday 28 March 2026 02:56:57 +0000 (0:00:00.227) 
0:00:57.324 ******** 2026-03-28 02:57:00.705673 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:00.705687 | orchestrator | 2026-03-28 02:57:00.705704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705719 | orchestrator | Saturday 28 March 2026 02:56:57 +0000 (0:00:00.198) 0:00:57.523 ******** 2026-03-28 02:57:00.705735 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f) 2026-03-28 02:57:00.705753 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f) 2026-03-28 02:57:00.705767 | orchestrator | 2026-03-28 02:57:00.705783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.705799 | orchestrator | Saturday 28 March 2026 02:56:58 +0000 (0:00:00.899) 0:00:58.423 ******** 2026-03-28 02:57:00.705943 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e) 2026-03-28 02:57:00.705995 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e) 2026-03-28 02:57:00.706011 | orchestrator | 2026-03-28 02:57:00.706173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.706193 | orchestrator | Saturday 28 March 2026 02:56:58 +0000 (0:00:00.511) 0:00:58.935 ******** 2026-03-28 02:57:00.706210 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8) 2026-03-28 02:57:00.706227 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8) 2026-03-28 02:57:00.706243 | orchestrator | 2026-03-28 02:57:00.706259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.706277 | orchestrator | Saturday 28 
March 2026 02:56:59 +0000 (0:00:00.464) 0:00:59.400 ******** 2026-03-28 02:57:00.706295 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b) 2026-03-28 02:57:00.706313 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b) 2026-03-28 02:57:00.706329 | orchestrator | 2026-03-28 02:57:00.706346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 02:57:00.706362 | orchestrator | Saturday 28 March 2026 02:56:59 +0000 (0:00:00.490) 0:00:59.890 ******** 2026-03-28 02:57:00.706378 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 02:57:00.706394 | orchestrator | 2026-03-28 02:57:00.706410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:00.706426 | orchestrator | Saturday 28 March 2026 02:57:00 +0000 (0:00:00.419) 0:01:00.309 ******** 2026-03-28 02:57:00.706441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-28 02:57:00.706458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-28 02:57:00.706474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-28 02:57:00.706489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-28 02:57:00.706504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-28 02:57:00.706520 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-28 02:57:00.706535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-28 02:57:00.706550 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-28 02:57:00.706565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-28 02:57:00.706580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-28 02:57:00.706595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-28 02:57:00.706630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-28 02:57:10.360804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-28 02:57:10.360892 | orchestrator | 2026-03-28 02:57:10.360905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.360914 | orchestrator | Saturday 28 March 2026 02:57:00 +0000 (0:00:00.465) 0:01:00.775 ******** 2026-03-28 02:57:10.360923 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.360932 | orchestrator | 2026-03-28 02:57:10.360940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.360961 | orchestrator | Saturday 28 March 2026 02:57:00 +0000 (0:00:00.230) 0:01:01.005 ******** 2026-03-28 02:57:10.360969 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.360995 | orchestrator | 2026-03-28 02:57:10.361004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361012 | orchestrator | Saturday 28 March 2026 02:57:01 +0000 (0:00:00.224) 0:01:01.230 ******** 2026-03-28 02:57:10.361020 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361062 | orchestrator | 2026-03-28 02:57:10.361072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361080 | 
orchestrator | Saturday 28 March 2026 02:57:01 +0000 (0:00:00.248) 0:01:01.478 ******** 2026-03-28 02:57:10.361088 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361095 | orchestrator | 2026-03-28 02:57:10.361103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361111 | orchestrator | Saturday 28 March 2026 02:57:01 +0000 (0:00:00.231) 0:01:01.709 ******** 2026-03-28 02:57:10.361119 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361127 | orchestrator | 2026-03-28 02:57:10.361135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361142 | orchestrator | Saturday 28 March 2026 02:57:02 +0000 (0:00:00.718) 0:01:02.428 ******** 2026-03-28 02:57:10.361150 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361158 | orchestrator | 2026-03-28 02:57:10.361166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361174 | orchestrator | Saturday 28 March 2026 02:57:02 +0000 (0:00:00.238) 0:01:02.666 ******** 2026-03-28 02:57:10.361181 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361189 | orchestrator | 2026-03-28 02:57:10.361197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361205 | orchestrator | Saturday 28 March 2026 02:57:02 +0000 (0:00:00.223) 0:01:02.889 ******** 2026-03-28 02:57:10.361213 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361220 | orchestrator | 2026-03-28 02:57:10.361228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361236 | orchestrator | Saturday 28 March 2026 02:57:03 +0000 (0:00:00.261) 0:01:03.151 ******** 2026-03-28 02:57:10.361244 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-28 02:57:10.361252 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-28 02:57:10.361260 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-28 02:57:10.361268 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-28 02:57:10.361276 | orchestrator | 2026-03-28 02:57:10.361284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361291 | orchestrator | Saturday 28 March 2026 02:57:03 +0000 (0:00:00.755) 0:01:03.906 ******** 2026-03-28 02:57:10.361299 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361307 | orchestrator | 2026-03-28 02:57:10.361315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361323 | orchestrator | Saturday 28 March 2026 02:57:04 +0000 (0:00:00.225) 0:01:04.131 ******** 2026-03-28 02:57:10.361330 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361338 | orchestrator | 2026-03-28 02:57:10.361346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361353 | orchestrator | Saturday 28 March 2026 02:57:04 +0000 (0:00:00.235) 0:01:04.367 ******** 2026-03-28 02:57:10.361362 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361371 | orchestrator | 2026-03-28 02:57:10.361380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 02:57:10.361388 | orchestrator | Saturday 28 March 2026 02:57:04 +0000 (0:00:00.294) 0:01:04.662 ******** 2026-03-28 02:57:10.361397 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361405 | orchestrator | 2026-03-28 02:57:10.361415 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 02:57:10.361424 | orchestrator | Saturday 28 March 2026 02:57:04 +0000 (0:00:00.220) 0:01:04.882 ******** 2026-03-28 02:57:10.361432 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
02:57:10.361441 | orchestrator | 2026-03-28 02:57:10.361457 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 02:57:10.361466 | orchestrator | Saturday 28 March 2026 02:57:04 +0000 (0:00:00.151) 0:01:05.034 ******** 2026-03-28 02:57:10.361475 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}}) 2026-03-28 02:57:10.361485 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e38c52ab-9b1d-5b26-b141-c51106128b29'}}) 2026-03-28 02:57:10.361495 | orchestrator | 2026-03-28 02:57:10.361504 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 02:57:10.361513 | orchestrator | Saturday 28 March 2026 02:57:05 +0000 (0:00:00.282) 0:01:05.316 ******** 2026-03-28 02:57:10.361522 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}) 2026-03-28 02:57:10.361532 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}) 2026-03-28 02:57:10.361541 | orchestrator | 2026-03-28 02:57:10.361551 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 02:57:10.361574 | orchestrator | Saturday 28 March 2026 02:57:07 +0000 (0:00:01.893) 0:01:07.209 ******** 2026-03-28 02:57:10.361583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:10.361592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:10.361601 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 02:57:10.361609 | orchestrator | 2026-03-28 02:57:10.361622 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 02:57:10.361630 | orchestrator | Saturday 28 March 2026 02:57:07 +0000 (0:00:00.394) 0:01:07.604 ******** 2026-03-28 02:57:10.361638 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}) 2026-03-28 02:57:10.361646 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}) 2026-03-28 02:57:10.361654 | orchestrator | 2026-03-28 02:57:10.361662 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 02:57:10.361669 | orchestrator | Saturday 28 March 2026 02:57:08 +0000 (0:00:01.392) 0:01:08.997 ******** 2026-03-28 02:57:10.361677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:10.361685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:10.361694 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361701 | orchestrator | 2026-03-28 02:57:10.361709 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 02:57:10.361717 | orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.163) 0:01:09.160 ******** 2026-03-28 02:57:10.361725 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361733 | orchestrator | 2026-03-28 02:57:10.361740 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 02:57:10.361748 | 
orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.149) 0:01:09.310 ******** 2026-03-28 02:57:10.361756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:10.361765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:10.361778 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361786 | orchestrator | 2026-03-28 02:57:10.361794 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 02:57:10.361802 | orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.154) 0:01:09.464 ******** 2026-03-28 02:57:10.361810 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361818 | orchestrator | 2026-03-28 02:57:10.361826 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 02:57:10.361834 | orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.153) 0:01:09.618 ******** 2026-03-28 02:57:10.361841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:10.361849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:10.361858 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361865 | orchestrator | 2026-03-28 02:57:10.361873 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-28 02:57:10.361881 | orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.175) 0:01:09.794 ******** 2026-03-28 02:57:10.361889 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 02:57:10.361897 | orchestrator | 2026-03-28 02:57:10.361905 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 02:57:10.361912 | orchestrator | Saturday 28 March 2026 02:57:09 +0000 (0:00:00.147) 0:01:09.942 ******** 2026-03-28 02:57:10.361920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:10.361928 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:10.361936 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:10.361944 | orchestrator | 2026-03-28 02:57:10.361952 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-28 02:57:10.361960 | orchestrator | Saturday 28 March 2026 02:57:10 +0000 (0:00:00.157) 0:01:10.099 ******** 2026-03-28 02:57:10.361968 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:10.361976 | orchestrator | 2026-03-28 02:57:10.361983 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 02:57:10.361991 | orchestrator | Saturday 28 March 2026 02:57:10 +0000 (0:00:00.149) 0:01:10.249 ******** 2026-03-28 02:57:10.362004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:17.252697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:17.252867 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.252895 | orchestrator | 2026-03-28 02:57:17.252914 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 02:57:17.252934 | orchestrator | Saturday 28 March 2026 02:57:10 +0000 (0:00:00.191) 0:01:10.440 ******** 2026-03-28 02:57:17.252977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:17.252996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:17.253013 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.253058 | orchestrator | 2026-03-28 02:57:17.253076 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 02:57:17.253093 | orchestrator | Saturday 28 March 2026 02:57:10 +0000 (0:00:00.177) 0:01:10.617 ******** 2026-03-28 02:57:17.253145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:17.253163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:17.253179 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.253196 | orchestrator | 2026-03-28 02:57:17.253212 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 02:57:17.253229 | orchestrator | Saturday 28 March 2026 02:57:10 +0000 (0:00:00.400) 0:01:11.018 ******** 2026-03-28 02:57:17.253246 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.253263 | orchestrator | 2026-03-28 02:57:17.253280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 02:57:17.253297 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 
(0:00:00.139) 0:01:11.157 ******** 2026-03-28 02:57:17.253313 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.253332 | orchestrator | 2026-03-28 02:57:17.253350 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-28 02:57:17.253367 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 (0:00:00.144) 0:01:11.301 ******** 2026-03-28 02:57:17.253383 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.253399 | orchestrator | 2026-03-28 02:57:17.253415 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 02:57:17.253431 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 (0:00:00.143) 0:01:11.445 ******** 2026-03-28 02:57:17.253447 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 02:57:17.253464 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 02:57:17.253480 | orchestrator | } 2026-03-28 02:57:17.253497 | orchestrator | 2026-03-28 02:57:17.253513 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 02:57:17.253530 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 (0:00:00.166) 0:01:11.611 ******** 2026-03-28 02:57:17.253546 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 02:57:17.253562 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 02:57:17.253577 | orchestrator | } 2026-03-28 02:57:17.253593 | orchestrator | 2026-03-28 02:57:17.253610 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 02:57:17.253625 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 (0:00:00.164) 0:01:11.775 ******** 2026-03-28 02:57:17.253641 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 02:57:17.253656 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 02:57:17.253672 | orchestrator | } 2026-03-28 02:57:17.253688 | orchestrator | 2026-03-28 02:57:17.253704 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 02:57:17.253720 | orchestrator | Saturday 28 March 2026 02:57:11 +0000 (0:00:00.162) 0:01:11.938 ******** 2026-03-28 02:57:17.253737 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:17.253753 | orchestrator | 2026-03-28 02:57:17.253769 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 02:57:17.253786 | orchestrator | Saturday 28 March 2026 02:57:12 +0000 (0:00:00.557) 0:01:12.496 ******** 2026-03-28 02:57:17.253802 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:17.253818 | orchestrator | 2026-03-28 02:57:17.253833 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 02:57:17.253850 | orchestrator | Saturday 28 March 2026 02:57:12 +0000 (0:00:00.558) 0:01:13.055 ******** 2026-03-28 02:57:17.253865 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:17.253881 | orchestrator | 2026-03-28 02:57:17.253897 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 02:57:17.253912 | orchestrator | Saturday 28 March 2026 02:57:13 +0000 (0:00:00.560) 0:01:13.615 ******** 2026-03-28 02:57:17.253927 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:17.253943 | orchestrator | 2026-03-28 02:57:17.253960 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 02:57:17.253989 | orchestrator | Saturday 28 March 2026 02:57:13 +0000 (0:00:00.177) 0:01:13.793 ******** 2026-03-28 02:57:17.254005 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254123 | orchestrator | 2026-03-28 02:57:17.254145 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-28 02:57:17.254162 | orchestrator | Saturday 28 March 2026 02:57:13 +0000 (0:00:00.126) 0:01:13.919 ******** 2026-03-28 02:57:17.254179 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254194 | orchestrator | 2026-03-28 02:57:17.254210 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 02:57:17.254225 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.380) 0:01:14.299 ******** 2026-03-28 02:57:17.254242 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 02:57:17.254257 | orchestrator |  "vgs_report": { 2026-03-28 02:57:17.254275 | orchestrator |  "vg": [] 2026-03-28 02:57:17.254322 | orchestrator |  } 2026-03-28 02:57:17.254340 | orchestrator | } 2026-03-28 02:57:17.254355 | orchestrator | 2026-03-28 02:57:17.254371 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 02:57:17.254386 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.163) 0:01:14.463 ******** 2026-03-28 02:57:17.254404 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254419 | orchestrator | 2026-03-28 02:57:17.254435 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 02:57:17.254450 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.153) 0:01:14.616 ******** 2026-03-28 02:57:17.254478 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254494 | orchestrator | 2026-03-28 02:57:17.254511 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 02:57:17.254526 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.151) 0:01:14.768 ******** 2026-03-28 02:57:17.254542 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254558 | orchestrator | 2026-03-28 02:57:17.254574 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-28 02:57:17.254590 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.160) 0:01:14.928 ******** 2026-03-28 02:57:17.254605 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254621 | orchestrator | 2026-03-28 02:57:17.254637 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 02:57:17.254653 | orchestrator | Saturday 28 March 2026 02:57:14 +0000 (0:00:00.151) 0:01:15.080 ******** 2026-03-28 02:57:17.254668 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254684 | orchestrator | 2026-03-28 02:57:17.254700 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 02:57:17.254716 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.137) 0:01:15.218 ******** 2026-03-28 02:57:17.254732 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254749 | orchestrator | 2026-03-28 02:57:17.254766 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 02:57:17.254781 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.137) 0:01:15.355 ******** 2026-03-28 02:57:17.254799 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254814 | orchestrator | 2026-03-28 02:57:17.254830 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 02:57:17.254846 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.167) 0:01:15.523 ******** 2026-03-28 02:57:17.254863 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254880 | orchestrator | 2026-03-28 02:57:17.254897 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 02:57:17.254912 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.151) 0:01:15.675 ******** 2026-03-28 02:57:17.254929 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.254945 | orchestrator | 2026-03-28 02:57:17.254960 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-28 02:57:17.254976 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.157) 0:01:15.833 ******** 2026-03-28 02:57:17.255005 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255023 | orchestrator | 2026-03-28 02:57:17.255122 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 02:57:17.255138 | orchestrator | Saturday 28 March 2026 02:57:15 +0000 (0:00:00.148) 0:01:15.981 ******** 2026-03-28 02:57:17.255155 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255171 | orchestrator | 2026-03-28 02:57:17.255187 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 02:57:17.255205 | orchestrator | Saturday 28 March 2026 02:57:16 +0000 (0:00:00.383) 0:01:16.364 ******** 2026-03-28 02:57:17.255221 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255236 | orchestrator | 2026-03-28 02:57:17.255252 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 02:57:17.255267 | orchestrator | Saturday 28 March 2026 02:57:16 +0000 (0:00:00.159) 0:01:16.524 ******** 2026-03-28 02:57:17.255283 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255300 | orchestrator | 2026-03-28 02:57:17.255316 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 02:57:17.255332 | orchestrator | Saturday 28 March 2026 02:57:16 +0000 (0:00:00.161) 0:01:16.686 ******** 2026-03-28 02:57:17.255347 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255363 | orchestrator | 2026-03-28 02:57:17.255378 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 02:57:17.255394 | orchestrator | Saturday 28 March 2026 02:57:16 +0000 (0:00:00.152) 0:01:16.839 ******** 2026-03-28 02:57:17.255411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:17.255429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:17.255446 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255463 | orchestrator | 2026-03-28 02:57:17.255479 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 02:57:17.255496 | orchestrator | Saturday 28 March 2026 02:57:16 +0000 (0:00:00.167) 0:01:17.006 ******** 2026-03-28 02:57:17.255512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:17.255528 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:17.255545 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:17.255560 | orchestrator | 2026-03-28 02:57:17.255576 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 02:57:17.255591 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.172) 0:01:17.179 ******** 2026-03-28 02:57:17.255623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.569816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.569961 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570000 | orchestrator | 2026-03-28 02:57:20.570190 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-28 02:57:20.570207 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.152) 0:01:17.332 ******** 2026-03-28 02:57:20.570217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570255 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570264 | orchestrator | 2026-03-28 02:57:20.570272 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 02:57:20.570317 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.188) 0:01:17.520 ******** 2026-03-28 02:57:20.570325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570344 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570353 | orchestrator | 2026-03-28 02:57:20.570363 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 02:57:20.570372 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.173) 0:01:17.694 ******** 2026-03-28 02:57:20.570381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570390 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570400 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570409 | orchestrator | 2026-03-28 02:57:20.570418 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 02:57:20.570437 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.167) 0:01:17.861 ******** 2026-03-28 02:57:20.570446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570463 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570472 | orchestrator | 2026-03-28 02:57:20.570480 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 02:57:20.570488 | orchestrator | Saturday 28 March 2026 02:57:17 +0000 (0:00:00.182) 0:01:18.044 ******** 2026-03-28 02:57:20.570497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570514 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570522 | orchestrator | 2026-03-28 02:57:20.570530 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 02:57:20.570538 | orchestrator | Saturday 28 March 2026 02:57:18 +0000 (0:00:00.164) 0:01:18.208 ******** 2026-03-28 02:57:20.570547 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:20.570555 | orchestrator | 2026-03-28 02:57:20.570564 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 02:57:20.570572 | orchestrator | Saturday 28 March 2026 02:57:18 +0000 (0:00:00.768) 0:01:18.976 ******** 2026-03-28 02:57:20.570580 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:20.570588 | orchestrator | 2026-03-28 02:57:20.570597 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 02:57:20.570605 | orchestrator | Saturday 28 March 2026 02:57:19 +0000 (0:00:00.591) 0:01:19.568 ******** 2026-03-28 02:57:20.570614 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:20.570622 | orchestrator | 2026-03-28 02:57:20.570630 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 02:57:20.570639 | orchestrator | Saturday 28 March 2026 02:57:19 +0000 (0:00:00.179) 0:01:19.748 ******** 2026-03-28 02:57:20.570665 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'vg_name': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}) 2026-03-28 02:57:20.570675 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'vg_name': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}) 2026-03-28 02:57:20.570684 | orchestrator | 2026-03-28 02:57:20.570692 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 02:57:20.570700 | orchestrator | Saturday 28 March 2026 02:57:19 +0000 (0:00:00.192) 0:01:19.940 ******** 2026-03-28 02:57:20.570727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570751 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570759 | orchestrator | 2026-03-28 02:57:20.570768 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 02:57:20.570776 | orchestrator | Saturday 28 March 2026 02:57:20 +0000 (0:00:00.178) 0:01:20.119 ******** 2026-03-28 02:57:20.570784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570801 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570810 | orchestrator | 2026-03-28 02:57:20.570818 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 02:57:20.570826 | orchestrator | Saturday 28 March 2026 02:57:20 +0000 (0:00:00.170) 0:01:20.289 ******** 2026-03-28 02:57:20.570835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 02:57:20.570843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 02:57:20.570851 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:20.570860 | orchestrator | 2026-03-28 02:57:20.570868 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 02:57:20.570876 | orchestrator | Saturday 28 March 2026 02:57:20 +0000 (0:00:00.167) 0:01:20.457 ******** 2026-03-28 02:57:20.570885 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-28 02:57:20.570893 | orchestrator |  "lvm_report": { 2026-03-28 02:57:20.570903 | orchestrator |  "lv": [ 2026-03-28 02:57:20.570911 | orchestrator |  { 2026-03-28 02:57:20.570920 | orchestrator |  "lv_name": "osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5", 2026-03-28 02:57:20.570930 | orchestrator |  "vg_name": "ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5" 2026-03-28 02:57:20.570938 | orchestrator |  }, 2026-03-28 02:57:20.570947 | orchestrator |  { 2026-03-28 02:57:20.570955 | orchestrator |  "lv_name": "osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29", 2026-03-28 02:57:20.570964 | orchestrator |  "vg_name": "ceph-e38c52ab-9b1d-5b26-b141-c51106128b29" 2026-03-28 02:57:20.570973 | orchestrator |  } 2026-03-28 02:57:20.570982 | orchestrator |  ], 2026-03-28 02:57:20.570991 | orchestrator |  "pv": [ 2026-03-28 02:57:20.571000 | orchestrator |  { 2026-03-28 02:57:20.571008 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 02:57:20.571016 | orchestrator |  "vg_name": "ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5" 2026-03-28 02:57:20.571045 | orchestrator |  }, 2026-03-28 02:57:20.571054 | orchestrator |  { 2026-03-28 02:57:20.571062 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 02:57:20.571084 | orchestrator |  "vg_name": "ceph-e38c52ab-9b1d-5b26-b141-c51106128b29" 2026-03-28 02:57:20.571093 | orchestrator |  } 2026-03-28 02:57:20.571102 | orchestrator |  ] 2026-03-28 02:57:20.571110 | orchestrator |  } 2026-03-28 02:57:20.571119 | orchestrator | } 2026-03-28 02:57:20.571129 | orchestrator | 2026-03-28 02:57:20.571138 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:57:20.571147 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 02:57:20.571155 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 02:57:20.571164 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 02:57:20.571172 | orchestrator | 2026-03-28 02:57:20.571181 | orchestrator | 2026-03-28 02:57:20.571190 | orchestrator | 2026-03-28 02:57:20.571198 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:57:20.571207 | orchestrator | Saturday 28 March 2026 02:57:20 +0000 (0:00:00.166) 0:01:20.623 ******** 2026-03-28 02:57:20.571216 | orchestrator | =============================================================================== 2026-03-28 02:57:20.571224 | orchestrator | Create block VGs -------------------------------------------------------- 5.87s 2026-03-28 02:57:20.571233 | orchestrator | Create block LVs -------------------------------------------------------- 4.35s 2026-03-28 02:57:20.571241 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.89s 2026-03-28 02:57:20.571250 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.79s 2026-03-28 02:57:20.571258 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.73s 2026-03-28 02:57:20.571267 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.68s 2026-03-28 02:57:20.571275 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s 2026-03-28 02:57:20.571284 | orchestrator | Add known links to the list of available block devices ------------------ 1.58s 2026-03-28 02:57:20.571298 | orchestrator | Add known partitions to the list of available block devices ------------- 1.41s 2026-03-28 02:57:20.966004 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.35s 2026-03-28 02:57:20.966291 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2026-03-28 02:57:20.966320 | 
orchestrator | Print LVM report data --------------------------------------------------- 1.08s 2026-03-28 02:57:20.966374 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.97s 2026-03-28 02:57:20.966396 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.96s 2026-03-28 02:57:20.966413 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-03-28 02:57:20.966433 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2026-03-28 02:57:20.966451 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-03-28 02:57:20.966469 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-03-28 02:57:20.966482 | orchestrator | Create WAL LVs for ceph_db_wal_devices ---------------------------------- 0.78s 2026-03-28 02:57:20.966493 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.76s 2026-03-28 02:57:33.563012 | orchestrator | 2026-03-28 02:57:33 | INFO  | Task b7d3ba65-b2cd-4003-8445-91a9e52465a4 (facts) was prepared for execution. 2026-03-28 02:57:33.563210 | orchestrator | 2026-03-28 02:57:33 | INFO  | It takes a moment until task b7d3ba65-b2cd-4003-8445-91a9e52465a4 (facts) has been started and output is visible here. 
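The LVM-configuration play above gathers `lvs` and `pvs` output as JSON and merges it into the `lvm_report` structure it prints (the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task). A minimal sketch of that merge, assuming LVM's standard `--reportformat json` shape; the helper name and field selection are illustrative, not taken from the playbook:

```python
import json

def combine_reports(lvs_out: str, pvs_out: str) -> dict:
    """Merge `lvs --reportformat json` and `pvs --reportformat json`
    captures into one {"lv": [...], "pv": [...]} dict, mirroring the
    lvm_report structure printed in the log above."""
    lv = json.loads(lvs_out)["report"][0]["lv"]
    pv = json.loads(pvs_out)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

# Hypothetical command captures shaped like LVM's JSON report format,
# reusing the VG/LV names shown in the log.
lvs_out = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5",
     "vg_name": "ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5"},
]}]})
pvs_out = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5"},
]}]})

lvm_report = combine_reports(lvs_out, pvs_out)
print(json.dumps(lvm_report, indent=2))
```

The later "Fail if ... LV defined in lvm_volumes is missing" tasks can then check the requested VG/LV pairs against this combined report.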
2026-03-28 02:57:47.869213 | orchestrator | 2026-03-28 02:57:47.869309 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 02:57:47.869351 | orchestrator | 2026-03-28 02:57:47.869361 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 02:57:47.869369 | orchestrator | Saturday 28 March 2026 02:57:37 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-03-28 02:57:47.869378 | orchestrator | ok: [testbed-manager] 2026-03-28 02:57:47.869387 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:57:47.869395 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:57:47.869403 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:57:47.869410 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:57:47.869418 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:57:47.869426 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:47.869434 | orchestrator | 2026-03-28 02:57:47.869442 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 02:57:47.869450 | orchestrator | Saturday 28 March 2026 02:57:39 +0000 (0:00:01.166) 0:00:01.441 ******** 2026-03-28 02:57:47.869458 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:57:47.869466 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:57:47.869475 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:57:47.869482 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:57:47.869490 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:57:47.869498 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:57:47.869506 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:47.869514 | orchestrator | 2026-03-28 02:57:47.869521 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 02:57:47.869529 | orchestrator | 2026-03-28 02:57:47.869537 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 02:57:47.869545 | orchestrator | Saturday 28 March 2026 02:57:40 +0000 (0:00:01.408) 0:00:02.849 ******** 2026-03-28 02:57:47.869553 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:57:47.869561 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:57:47.869569 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:57:47.869577 | orchestrator | ok: [testbed-manager] 2026-03-28 02:57:47.869585 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:57:47.869592 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:57:47.869600 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:57:47.869608 | orchestrator | 2026-03-28 02:57:47.869616 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 02:57:47.869624 | orchestrator | 2026-03-28 02:57:47.869632 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 02:57:47.869640 | orchestrator | Saturday 28 March 2026 02:57:46 +0000 (0:00:06.248) 0:00:09.097 ******** 2026-03-28 02:57:47.869647 | orchestrator | skipping: [testbed-manager] 2026-03-28 02:57:47.869655 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:57:47.869663 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:57:47.869671 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:57:47.869679 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:57:47.869687 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:57:47.869694 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:57:47.869702 | orchestrator | 2026-03-28 02:57:47.869710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 02:57:47.869718 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869728 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 02:57:47.869736 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869744 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869752 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869767 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869776 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 02:57:47.869783 | orchestrator | 2026-03-28 02:57:47.869791 | orchestrator | 2026-03-28 02:57:47.869799 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 02:57:47.869821 | orchestrator | Saturday 28 March 2026 02:57:47 +0000 (0:00:00.586) 0:00:09.684 ******** 2026-03-28 02:57:47.869830 | orchestrator | =============================================================================== 2026-03-28 02:57:47.869837 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.25s 2026-03-28 02:57:47.869845 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s 2026-03-28 02:57:47.869853 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2026-03-28 02:57:47.869861 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-03-28 02:57:50.312384 | orchestrator | 2026-03-28 02:57:50 | INFO  | Task 3f2207fd-af83-4e1a-a9f5-736da8640b9a (ceph) was prepared for execution. 2026-03-28 02:57:50.312467 | orchestrator | 2026-03-28 02:57:50 | INFO  | It takes a moment until task 3f2207fd-af83-4e1a-a9f5-736da8640b9a (ceph) has been started and output is visible here. 
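The ceph-facts tasks that follow ("Check if podman binary is present" / "Set_fact container_binary") select a container runtime per host. A minimal sketch of that selection, assuming docker is the fallback when podman is not on the PATH; the helper name and the injectable lookup are hypothetical, for illustration only:

```python
from shutil import which

def select_container_binary(path_lookup=which) -> str:
    """Return 'podman' when the binary is found on PATH, otherwise
    fall back to 'docker' -- mirroring the presence check in the log.
    `path_lookup` is injectable so the choice can be demonstrated
    without depending on the machine running this sketch."""
    return "podman" if path_lookup("podman") else "docker"

# Deterministic demonstration with stubbed PATH lookups:
print(select_container_binary(lambda name: "/usr/bin/podman"))  # podman found
print(select_container_binary(lambda name: None))               # podman absent
```

Subsequent facts such as `ceph_cmd` and `container_exec_cmd` would then be built on top of whichever binary was selected.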
2026-03-28 02:58:09.511849 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 02:58:09.511962 | orchestrator | 2.16.14 2026-03-28 02:58:09.511980 | orchestrator | 2026-03-28 02:58:09.511993 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-28 02:58:09.512032 | orchestrator | 2026-03-28 02:58:09.512044 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 02:58:09.512055 | orchestrator | Saturday 28 March 2026 02:57:55 +0000 (0:00:00.872) 0:00:00.872 ******** 2026-03-28 02:58:09.512068 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:58:09.512080 | orchestrator | 2026-03-28 02:58:09.512091 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 02:58:09.512102 | orchestrator | Saturday 28 March 2026 02:57:56 +0000 (0:00:01.240) 0:00:02.113 ******** 2026-03-28 02:58:09.512113 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:09.512124 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:09.512135 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:09.512146 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:09.512157 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:09.512167 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:09.512179 | orchestrator | 2026-03-28 02:58:09.512190 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 02:58:09.512201 | orchestrator | Saturday 28 March 2026 02:57:58 +0000 (0:00:01.403) 0:00:03.517 ******** 2026-03-28 02:58:09.512212 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:09.512223 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:09.512234 | orchestrator | ok: [testbed-node-5] 2026-03-28 
02:58:09.512245 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:09.512255 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:09.512266 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:09.512277 | orchestrator | 2026-03-28 02:58:09.512288 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 02:58:09.512313 | orchestrator | Saturday 28 March 2026 02:57:59 +0000 (0:00:00.801) 0:00:04.318 ******** 2026-03-28 02:58:09.512335 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:09.512346 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:09.512357 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:09.512368 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:09.512405 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:09.512419 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:09.512431 | orchestrator | 2026-03-28 02:58:09.512444 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 02:58:09.512457 | orchestrator | Saturday 28 March 2026 02:58:00 +0000 (0:00:00.953) 0:00:05.272 ******** 2026-03-28 02:58:09.512469 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:09.512481 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:09.512494 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:09.512507 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:09.512520 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:09.512533 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:09.512547 | orchestrator | 2026-03-28 02:58:09.512559 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 02:58:09.512572 | orchestrator | Saturday 28 March 2026 02:58:00 +0000 (0:00:00.866) 0:00:06.138 ******** 2026-03-28 02:58:09.512584 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:09.512596 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:09.512608 | orchestrator | ok: 
[testbed-node-5]
2026-03-28 02:58:09.512620 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:58:09.512633 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:58:09.512645 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:58:09.512657 | orchestrator |
2026-03-28 02:58:09.512670 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 02:58:09.512684 | orchestrator | Saturday 28 March 2026 02:58:01 +0000 (0:00:00.835) 0:00:06.974 ********
2026-03-28 02:58:09.512696 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:58:09.512709 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:58:09.512721 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:58:09.512734 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:58:09.512746 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:58:09.512758 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:58:09.512772 | orchestrator |
2026-03-28 02:58:09.512784 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 02:58:09.512797 | orchestrator | Saturday 28 March 2026 02:58:02 +0000 (0:00:00.852) 0:00:07.826 ********
2026-03-28 02:58:09.512808 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:09.512820 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:09.512831 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:09.512842 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:09.512853 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:09.512864 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:09.512875 | orchestrator |
2026-03-28 02:58:09.512913 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 02:58:09.512936 | orchestrator | Saturday 28 March 2026 02:58:03 +0000 (0:00:00.628) 0:00:08.455 ********
2026-03-28 02:58:09.512959 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:58:09.512982 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:58:09.513077 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:58:09.513092 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:58:09.513103 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:58:09.513129 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:58:09.513140 | orchestrator |
2026-03-28 02:58:09.513151 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 02:58:09.513162 | orchestrator | Saturday 28 March 2026 02:58:04 +0000 (0:00:00.964) 0:00:09.419 ********
2026-03-28 02:58:09.513173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 02:58:09.513184 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 02:58:09.513195 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 02:58:09.513205 | orchestrator |
2026-03-28 02:58:09.513216 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 02:58:09.513227 | orchestrator | Saturday 28 March 2026 02:58:04 +0000 (0:00:00.657) 0:00:10.076 ********
2026-03-28 02:58:09.513247 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:58:09.513258 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:58:09.513269 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:58:09.513298 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:58:09.513310 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:58:09.513321 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:58:09.513332 | orchestrator |
2026-03-28 02:58:09.513343 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 02:58:09.513354 | orchestrator | Saturday 28 March 2026 02:58:05 +0000 (0:00:00.745) 0:00:10.822 ********
2026-03-28 02:58:09.513365 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 02:58:09.513376 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 02:58:09.513387 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 02:58:09.513398 | orchestrator |
2026-03-28 02:58:09.513409 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 02:58:09.513419 | orchestrator | Saturday 28 March 2026 02:58:08 +0000 (0:00:02.442) 0:00:13.264 ********
2026-03-28 02:58:09.513430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 02:58:09.513442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 02:58:09.513453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 02:58:09.513464 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:09.513474 | orchestrator |
2026-03-28 02:58:09.513485 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 02:58:09.513496 | orchestrator | Saturday 28 March 2026 02:58:08 +0000 (0:00:00.426) 0:00:13.691 ********
2026-03-28 02:58:09.513509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513545 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:09.513556 | orchestrator |
2026-03-28 02:58:09.513567 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 02:58:09.513578 | orchestrator | Saturday 28 March 2026 02:58:09 +0000 (0:00:00.609) 0:00:14.300 ********
2026-03-28 02:58:09.513591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:09.513635 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:09.513646 | orchestrator |
2026-03-28 02:58:09.513662 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 02:58:09.513674 | orchestrator | Saturday 28 March 2026 02:58:09 +0000 (0:00:00.180) 0:00:14.480 ********
2026-03-28 02:58:09.513695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 02:58:06.549664', 'end': '2026-03-28 02:58:06.597961', 'delta': '0:00:00.048297', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 02:58:19.743923 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 02:58:07.136002', 'end': '2026-03-28 02:58:07.180712', 'delta': '0:00:00.044710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 02:58:19.744089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 02:58:07.717741', 'end': '2026-03-28 02:58:07.771084', 'delta': '0:00:00.053343', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 02:58:19.744109 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744123 | orchestrator |
2026-03-28 02:58:19.744135 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 02:58:19.744146 | orchestrator | Saturday 28 March 2026 02:58:09 +0000 (0:00:00.181) 0:00:14.661 ********
2026-03-28 02:58:19.744156 | orchestrator | ok: [testbed-node-3]
2026-03-28 02:58:19.744167 | orchestrator | ok: [testbed-node-4]
2026-03-28 02:58:19.744176 | orchestrator | ok: [testbed-node-5]
2026-03-28 02:58:19.744186 | orchestrator | ok: [testbed-node-0]
2026-03-28 02:58:19.744196 | orchestrator | ok: [testbed-node-1]
2026-03-28 02:58:19.744206 | orchestrator | ok: [testbed-node-2]
2026-03-28 02:58:19.744215 | orchestrator |
2026-03-28 02:58:19.744225 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 02:58:19.744235 | orchestrator | Saturday 28 March 2026 02:58:10 +0000 (0:00:00.751) 0:00:15.413 ********
2026-03-28 02:58:19.744245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 02:58:19.744255 | orchestrator |
2026-03-28 02:58:19.744265 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 02:58:19.744275 | orchestrator | Saturday 28 March 2026 02:58:11 +0000 (0:00:01.111) 0:00:16.525 ********
2026-03-28 02:58:19.744310 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744320 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744330 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744340 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744350 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744360 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744370 | orchestrator |
2026-03-28 02:58:19.744379 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 02:58:19.744390 | orchestrator | Saturday 28 March 2026 02:58:11 +0000 (0:00:00.638) 0:00:17.164 ********
2026-03-28 02:58:19.744400 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744410 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744420 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744435 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744451 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744467 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744482 | orchestrator |
2026-03-28 02:58:19.744495 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 02:58:19.744506 | orchestrator | Saturday 28 March 2026 02:58:13 +0000 (0:00:01.213) 0:00:18.377 ********
2026-03-28 02:58:19.744517 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744528 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744539 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744550 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744561 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744585 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744597 | orchestrator |
2026-03-28 02:58:19.744608 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 02:58:19.744619 | orchestrator | Saturday 28 March 2026 02:58:13 +0000 (0:00:00.635) 0:00:19.013 ********
2026-03-28 02:58:19.744630 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744641 | orchestrator |
2026-03-28 02:58:19.744653 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 02:58:19.744665 | orchestrator | Saturday 28 March 2026 02:58:13 +0000 (0:00:00.142) 0:00:19.155 ********
2026-03-28 02:58:19.744675 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744685 | orchestrator |
2026-03-28 02:58:19.744695 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 02:58:19.744704 | orchestrator | Saturday 28 March 2026 02:58:14 +0000 (0:00:00.237) 0:00:19.392 ********
2026-03-28 02:58:19.744714 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744724 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744733 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744743 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744752 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744762 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744772 | orchestrator |
2026-03-28 02:58:19.744800 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 02:58:19.744811 | orchestrator | Saturday 28 March 2026 02:58:15 +0000 (0:00:00.815) 0:00:20.208 ********
2026-03-28 02:58:19.744821 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744830 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744840 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744849 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744859 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744869 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744878 | orchestrator |
2026-03-28 02:58:19.744888 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 02:58:19.744897 | orchestrator | Saturday 28 March 2026 02:58:15 +0000 (0:00:00.604) 0:00:20.812 ********
2026-03-28 02:58:19.744907 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.744916 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.744926 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.744945 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.744955 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.744964 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.744974 | orchestrator |
2026-03-28 02:58:19.744984 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 02:58:19.744994 | orchestrator | Saturday 28 March 2026 02:58:16 +0000 (0:00:00.847) 0:00:21.660 ********
2026-03-28 02:58:19.745003 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.745041 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.745057 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.745074 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.745091 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.745108 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.745118 | orchestrator |
2026-03-28 02:58:19.745128 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 02:58:19.745138 | orchestrator | Saturday 28 March 2026 02:58:17 +0000 (0:00:00.628) 0:00:22.288 ********
2026-03-28 02:58:19.745147 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.745156 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.745166 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.745175 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.745184 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.745194 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.745203 | orchestrator |
2026-03-28 02:58:19.745213 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 02:58:19.745222 | orchestrator | Saturday 28 March 2026 02:58:17 +0000 (0:00:00.868) 0:00:23.156 ********
2026-03-28 02:58:19.745231 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.745241 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.745250 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.745259 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.745268 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.745278 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.745287 | orchestrator |
2026-03-28 02:58:19.745297 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 02:58:19.745307 | orchestrator | Saturday 28 March 2026 02:58:18 +0000 (0:00:00.650) 0:00:23.807 ********
2026-03-28 02:58:19.745317 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.745326 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:19.745335 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:19.745345 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:19.745354 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:19.745364 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:19.745373 | orchestrator |
2026-03-28 02:58:19.745383 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 02:58:19.745392 | orchestrator | Saturday 28 March 2026 02:58:19 +0000 (0:00:00.871) 0:00:24.678 ********
2026-03-28 02:58:19.745403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.745423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.745450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.746554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.746577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.746598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.904637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.904721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.904728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904763 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:58:19.904781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.904818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.904833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.904844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle':
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 02:58:19.978668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 02:58:19.978747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 02:58:19.978758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 02:58:19.978765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 02:58:19.978772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 02:58:19.978811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
2026-03-28 02:58:19.978819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.978825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.978844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.978851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.978857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:19.978870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:19.978887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349520 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:58:20.349539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.349731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.349768 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:58:20.349789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.601555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:20.601566 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:58:20.601576 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:58:20.601585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:20.601642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:21.119845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:21.119951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 02:58:21.120067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:21.120092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 02:58:21.120107 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:58:21.120120 | orchestrator |
2026-03-28 02:58:21.120133 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 02:58:21.120146 | orchestrator | Saturday 28 March 2026 02:58:20 +0000 (0:00:01.067) 0:00:25.746 ********
2026-03-28 02:58:21.120199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:21.120259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:21.120283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:21.120303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 02:58:21.120330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped':
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.120348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.120379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 
'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': 
{'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.193455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351790 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.351942 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:21.351981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 02:58:21.352065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477697 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.477781 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:21.477803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.617678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.617840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 02:58:21.617916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.617964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.617979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.617992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.618137 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.618153 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.618175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760181 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760259 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760268 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760287 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760320 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760331 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760343 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:21.760350 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760362 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.760372 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996214 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996313 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996340 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996349 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996374 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996395 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996405 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:21.996414 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:21.996422 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996430 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996438 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996446 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:21.996460 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:28.873521 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:28.873633 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:28.873647 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:28.873679 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 02:58:28.873719 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 02:58:28.873733 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:28.873744 | orchestrator | 2026-03-28 02:58:28.873756 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 02:58:28.873767 | orchestrator | Saturday 28 March 2026 02:58:21 +0000 (0:00:01.402) 0:00:27.149 ******** 2026-03-28 02:58:28.873777 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:28.873787 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:28.873797 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:28.873806 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:28.873816 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:28.873826 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:28.873835 | orchestrator | 2026-03-28 02:58:28.873845 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 02:58:28.873855 | orchestrator | Saturday 28 March 2026 02:58:22 +0000 (0:00:00.964) 0:00:28.113 ******** 2026-03-28 02:58:28.873864 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:28.873874 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:28.873884 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:28.873894 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:28.873903 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:28.873913 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:28.873922 | orchestrator | 2026-03-28 02:58:28.873932 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 02:58:28.873942 | orchestrator | Saturday 28 March 2026 02:58:23 +0000 (0:00:00.829) 0:00:28.942 ******** 2026-03-28 02:58:28.873952 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:28.873961 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:28.873971 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:28.873981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:28.873990 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:28.874000 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:28.874010 | orchestrator | 2026-03-28 02:58:28.874145 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 02:58:28.874166 | orchestrator | Saturday 28 March 2026 02:58:24 +0000 (0:00:00.683) 0:00:29.625 ******** 2026-03-28 02:58:28.874182 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:28.874199 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:28.874215 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:28.874233 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:28.874251 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:28.874267 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:28.874285 | orchestrator | 2026-03-28 02:58:28.874302 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 02:58:28.874319 | orchestrator | Saturday 28 March 2026 02:58:25 +0000 (0:00:00.897) 0:00:30.522 ******** 2026-03-28 02:58:28.874336 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 02:58:28.874352 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:28.874369 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:28.874414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:28.874435 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:28.874452 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:28.874468 | orchestrator | 2026-03-28 02:58:28.874483 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 02:58:28.874497 | orchestrator | Saturday 28 March 2026 02:58:26 +0000 (0:00:00.647) 0:00:31.170 ******** 2026-03-28 02:58:28.874506 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:28.874516 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:28.874525 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:28.874534 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:28.874544 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:28.874553 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:28.874563 | orchestrator | 2026-03-28 02:58:28.874573 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 02:58:28.874582 | orchestrator | Saturday 28 March 2026 02:58:27 +0000 (0:00:00.995) 0:00:32.166 ******** 2026-03-28 02:58:28.874592 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-28 02:58:28.874602 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-28 02:58:28.874611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-28 02:58:28.874621 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-28 02:58:28.874630 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-28 02:58:28.874640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 02:58:28.874649 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2026-03-28 02:58:28.874659 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 02:58:28.874668 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 02:58:28.874677 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 02:58:28.874687 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-28 02:58:28.874696 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 02:58:28.874706 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 02:58:28.874716 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-28 02:58:28.874736 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 02:58:44.161692 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-28 02:58:44.161793 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-28 02:58:44.161820 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 02:58:44.161830 | orchestrator | 2026-03-28 02:58:44.161840 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 02:58:44.161850 | orchestrator | Saturday 28 March 2026 02:58:28 +0000 (0:00:01.852) 0:00:34.018 ******** 2026-03-28 02:58:44.161859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 02:58:44.161868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 02:58:44.161876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 02:58:44.161885 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.161893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 02:58:44.161901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 02:58:44.161909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 02:58:44.161917 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 02:58:44.161925 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 02:58:44.161933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 02:58:44.161941 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 02:58:44.161949 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:44.161957 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 02:58:44.161966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 02:58:44.161995 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 02:58:44.162004 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:44.162012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 02:58:44.162126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 02:58:44.162135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 02:58:44.162143 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:44.162151 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 02:58:44.162159 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 02:58:44.162167 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 02:58:44.162175 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:44.162183 | orchestrator | 2026-03-28 02:58:44.162191 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 02:58:44.162200 | orchestrator | Saturday 28 March 2026 02:58:30 +0000 (0:00:01.255) 0:00:35.274 ******** 2026-03-28 02:58:44.162210 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:44.162219 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:44.162228 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:44.162237 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 02:58:44.162247 | orchestrator | 2026-03-28 02:58:44.162256 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 02:58:44.162266 | orchestrator | Saturday 28 March 2026 02:58:31 +0000 (0:00:01.131) 0:00:36.405 ******** 2026-03-28 02:58:44.162275 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162285 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:44.162294 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:44.162302 | orchestrator | 2026-03-28 02:58:44.162312 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 02:58:44.162321 | orchestrator | Saturday 28 March 2026 02:58:31 +0000 (0:00:00.353) 0:00:36.759 ******** 2026-03-28 02:58:44.162330 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162339 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:44.162348 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:44.162356 | orchestrator | 2026-03-28 02:58:44.162365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 02:58:44.162375 | orchestrator | Saturday 28 March 2026 02:58:31 +0000 (0:00:00.344) 0:00:37.103 ******** 2026-03-28 02:58:44.162383 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162392 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:44.162401 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:44.162410 | orchestrator | 2026-03-28 02:58:44.162419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 02:58:44.162428 | orchestrator | Saturday 28 March 2026 02:58:32 +0000 (0:00:00.519) 0:00:37.623 ******** 2026-03-28 02:58:44.162437 | orchestrator | 
ok: [testbed-node-3] 2026-03-28 02:58:44.162451 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:44.162464 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:44.162476 | orchestrator | 2026-03-28 02:58:44.162489 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 02:58:44.162502 | orchestrator | Saturday 28 March 2026 02:58:32 +0000 (0:00:00.447) 0:00:38.070 ******** 2026-03-28 02:58:44.162515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 02:58:44.162528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 02:58:44.162541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 02:58:44.162554 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162565 | orchestrator | 2026-03-28 02:58:44.162577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 02:58:44.162604 | orchestrator | Saturday 28 March 2026 02:58:33 +0000 (0:00:00.417) 0:00:38.488 ******** 2026-03-28 02:58:44.162618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 02:58:44.162631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 02:58:44.162645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 02:58:44.162659 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162672 | orchestrator | 2026-03-28 02:58:44.162706 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 02:58:44.162721 | orchestrator | Saturday 28 March 2026 02:58:33 +0000 (0:00:00.418) 0:00:38.907 ******** 2026-03-28 02:58:44.162737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 02:58:44.162745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 02:58:44.162753 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-28 02:58:44.162761 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.162769 | orchestrator | 2026-03-28 02:58:44.162777 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 02:58:44.162785 | orchestrator | Saturday 28 March 2026 02:58:34 +0000 (0:00:00.411) 0:00:39.318 ******** 2026-03-28 02:58:44.162793 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:44.162801 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:44.162809 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:44.162816 | orchestrator | 2026-03-28 02:58:44.162824 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 02:58:44.162832 | orchestrator | Saturday 28 March 2026 02:58:34 +0000 (0:00:00.360) 0:00:39.679 ******** 2026-03-28 02:58:44.162840 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 02:58:44.162848 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 02:58:44.162856 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 02:58:44.162864 | orchestrator | 2026-03-28 02:58:44.162872 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 02:58:44.162880 | orchestrator | Saturday 28 March 2026 02:58:35 +0000 (0:00:01.086) 0:00:40.765 ******** 2026-03-28 02:58:44.162888 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 02:58:44.162897 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 02:58:44.162904 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 02:58:44.162912 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 02:58:44.162920 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 02:58:44.162928 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 02:58:44.162936 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 02:58:44.162944 | orchestrator | 2026-03-28 02:58:44.162952 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 02:58:44.162960 | orchestrator | Saturday 28 March 2026 02:58:36 +0000 (0:00:01.121) 0:00:41.887 ******** 2026-03-28 02:58:44.162967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 02:58:44.162975 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 02:58:44.162983 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 02:58:44.162991 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 02:58:44.162999 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 02:58:44.163007 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 02:58:44.163015 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 02:58:44.163058 | orchestrator | 2026-03-28 02:58:44.163067 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 02:58:44.163083 | orchestrator | Saturday 28 March 2026 02:58:38 +0000 (0:00:02.061) 0:00:43.949 ******** 2026-03-28 02:58:44.163092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:58:44.163102 | orchestrator | 2026-03-28 02:58:44.163111 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-28 02:58:44.163120 | orchestrator | Saturday 28 March 2026 02:58:40 +0000 (0:00:01.265) 0:00:45.214 ******** 2026-03-28 02:58:44.163128 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 02:58:44.163136 | orchestrator | 2026-03-28 02:58:44.163144 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 02:58:44.163152 | orchestrator | Saturday 28 March 2026 02:58:41 +0000 (0:00:01.279) 0:00:46.494 ******** 2026-03-28 02:58:44.163160 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:58:44.163168 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:58:44.163177 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:58:44.163191 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:58:44.163205 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:58:44.163218 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:58:44.163232 | orchestrator | 2026-03-28 02:58:44.163245 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 02:58:44.163258 | orchestrator | Saturday 28 March 2026 02:58:42 +0000 (0:00:01.298) 0:00:47.792 ******** 2026-03-28 02:58:44.163271 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:58:44.163282 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:44.163294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:58:44.163307 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:44.163320 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:58:44.163333 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:58:44.163347 | orchestrator | 2026-03-28 02:58:44.163361 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 02:58:44.163375 | orchestrator | Saturday 28 March 2026 02:58:43 +0000 
(0:00:00.736) 0:00:48.529 ******** 2026-03-28 02:58:44.163389 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:58:44.163403 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:58:44.163422 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.019714 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.019819 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.019833 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.019878 | orchestrator | 2026-03-28 02:59:07.019890 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 02:59:07.019901 | orchestrator | Saturday 28 March 2026 02:58:44 +0000 (0:00:01.025) 0:00:49.554 ******** 2026-03-28 02:59:07.019909 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.019917 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.019924 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.019932 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.019940 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.019948 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.019956 | orchestrator | 2026-03-28 02:59:07.019963 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 02:59:07.019971 | orchestrator | Saturday 28 March 2026 02:58:45 +0000 (0:00:00.805) 0:00:50.360 ******** 2026-03-28 02:59:07.019979 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.019987 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.019994 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020002 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020010 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020018 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020063 | orchestrator | 2026-03-28 02:59:07.020070 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-28 02:59:07.020103 | orchestrator | Saturday 28 March 2026 02:58:46 +0000 (0:00:01.235) 0:00:51.595 ******** 2026-03-28 02:59:07.020108 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020112 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020117 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020122 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020126 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020131 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020135 | orchestrator | 2026-03-28 02:59:07.020141 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 02:59:07.020145 | orchestrator | Saturday 28 March 2026 02:58:47 +0000 (0:00:00.640) 0:00:52.236 ******** 2026-03-28 02:59:07.020150 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020154 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020159 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020164 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020173 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020177 | orchestrator | 2026-03-28 02:59:07.020182 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 02:59:07.020190 | orchestrator | Saturday 28 March 2026 02:58:47 +0000 (0:00:00.856) 0:00:53.093 ******** 2026-03-28 02:59:07.020197 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020205 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020212 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020219 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020226 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020233 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020240 | orchestrator | 2026-03-28 
02:59:07.020247 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 02:59:07.020254 | orchestrator | Saturday 28 March 2026 02:58:48 +0000 (0:00:01.033) 0:00:54.126 ******** 2026-03-28 02:59:07.020261 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020270 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020278 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020285 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020293 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020301 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020309 | orchestrator | 2026-03-28 02:59:07.020316 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 02:59:07.020324 | orchestrator | Saturday 28 March 2026 02:58:50 +0000 (0:00:01.368) 0:00:55.495 ******** 2026-03-28 02:59:07.020332 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020340 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020345 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020351 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020355 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020360 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020364 | orchestrator | 2026-03-28 02:59:07.020369 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 02:59:07.020373 | orchestrator | Saturday 28 March 2026 02:58:51 +0000 (0:00:00.684) 0:00:56.180 ******** 2026-03-28 02:59:07.020378 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020382 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020387 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020391 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020396 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020400 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020405 | orchestrator | 2026-03-28 02:59:07.020409 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 02:59:07.020414 | orchestrator | Saturday 28 March 2026 02:58:51 +0000 (0:00:00.929) 0:00:57.110 ******** 2026-03-28 02:59:07.020418 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020422 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020432 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020437 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020442 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020446 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020451 | orchestrator | 2026-03-28 02:59:07.020455 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 02:59:07.020460 | orchestrator | Saturday 28 March 2026 02:58:52 +0000 (0:00:00.652) 0:00:57.762 ******** 2026-03-28 02:59:07.020464 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020469 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020473 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020478 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020482 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020487 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020491 | orchestrator | 2026-03-28 02:59:07.020496 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 02:59:07.020501 | orchestrator | Saturday 28 March 2026 02:58:53 +0000 (0:00:00.876) 0:00:58.638 ******** 2026-03-28 02:59:07.020505 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020510 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020531 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020536 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
02:59:07.020541 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020551 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020556 | orchestrator | 2026-03-28 02:59:07.020561 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 02:59:07.020565 | orchestrator | Saturday 28 March 2026 02:58:54 +0000 (0:00:00.630) 0:00:59.269 ******** 2026-03-28 02:59:07.020570 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020574 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020579 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020583 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020588 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020592 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020597 | orchestrator | 2026-03-28 02:59:07.020602 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 02:59:07.020606 | orchestrator | Saturday 28 March 2026 02:58:54 +0000 (0:00:00.854) 0:01:00.123 ******** 2026-03-28 02:59:07.020611 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020615 | orchestrator | skipping: [testbed-node-4] 2026-03-28 02:59:07.020622 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020629 | orchestrator | skipping: [testbed-node-0] 2026-03-28 02:59:07.020636 | orchestrator | skipping: [testbed-node-1] 2026-03-28 02:59:07.020644 | orchestrator | skipping: [testbed-node-2] 2026-03-28 02:59:07.020651 | orchestrator | 2026-03-28 02:59:07.020658 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 02:59:07.020665 | orchestrator | Saturday 28 March 2026 02:58:55 +0000 (0:00:00.591) 0:01:00.715 ******** 2026-03-28 02:59:07.020672 | orchestrator | skipping: [testbed-node-3] 2026-03-28 02:59:07.020678 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
02:59:07.020685 | orchestrator | skipping: [testbed-node-5] 2026-03-28 02:59:07.020692 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020699 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020706 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020712 | orchestrator | 2026-03-28 02:59:07.020719 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 02:59:07.020725 | orchestrator | Saturday 28 March 2026 02:58:56 +0000 (0:00:01.061) 0:01:01.777 ******** 2026-03-28 02:59:07.020732 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020739 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020746 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020752 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020760 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020767 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020782 | orchestrator | 2026-03-28 02:59:07.020790 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 02:59:07.020798 | orchestrator | Saturday 28 March 2026 02:58:57 +0000 (0:00:00.864) 0:01:02.642 ******** 2026-03-28 02:59:07.020803 | orchestrator | ok: [testbed-node-3] 2026-03-28 02:59:07.020807 | orchestrator | ok: [testbed-node-4] 2026-03-28 02:59:07.020812 | orchestrator | ok: [testbed-node-5] 2026-03-28 02:59:07.020816 | orchestrator | ok: [testbed-node-0] 2026-03-28 02:59:07.020821 | orchestrator | ok: [testbed-node-1] 2026-03-28 02:59:07.020825 | orchestrator | ok: [testbed-node-2] 2026-03-28 02:59:07.020830 | orchestrator | 2026-03-28 02:59:07.020834 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 02:59:07.020839 | orchestrator | Saturday 28 March 2026 02:58:58 +0000 (0:00:01.352) 0:01:03.994 ******** 2026-03-28 02:59:07.020844 | orchestrator | changed: [testbed-node-3] 2026-03-28 02:59:07.020848 | 
orchestrator | changed: [testbed-node-4]
2026-03-28 02:59:07.020853 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:59:07.020857 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:59:07.020862 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:59:07.020866 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:59:07.020871 | orchestrator |
2026-03-28 02:59:07.020875 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 02:59:07.020880 | orchestrator | Saturday 28 March 2026 02:59:00 +0000 (0:00:01.581) 0:01:05.576 ********
2026-03-28 02:59:07.020884 | orchestrator | changed: [testbed-node-4]
2026-03-28 02:59:07.020889 | orchestrator | changed: [testbed-node-5]
2026-03-28 02:59:07.020893 | orchestrator | changed: [testbed-node-3]
2026-03-28 02:59:07.020899 | orchestrator | changed: [testbed-node-1]
2026-03-28 02:59:07.020904 | orchestrator | changed: [testbed-node-0]
2026-03-28 02:59:07.020910 | orchestrator | changed: [testbed-node-2]
2026-03-28 02:59:07.020915 | orchestrator |
2026-03-28 02:59:07.020921 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 02:59:07.020927 | orchestrator | Saturday 28 March 2026 02:59:02 +0000 (0:00:02.327) 0:01:07.904 ********
2026-03-28 02:59:07.020933 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 02:59:07.020941 | orchestrator |
2026-03-28 02:59:07.020946 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 02:59:07.020952 | orchestrator | Saturday 28 March 2026 02:59:04 +0000 (0:00:01.303) 0:01:09.207 ********
2026-03-28 02:59:07.020958 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:59:07.020963 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:59:07.020968 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:59:07.020974 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:59:07.020979 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:59:07.020985 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:59:07.020990 | orchestrator |
2026-03-28 02:59:07.020995 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 02:59:07.021001 | orchestrator | Saturday 28 March 2026 02:59:04 +0000 (0:00:00.662) 0:01:09.870 ********
2026-03-28 02:59:07.021006 | orchestrator | skipping: [testbed-node-3]
2026-03-28 02:59:07.021011 | orchestrator | skipping: [testbed-node-4]
2026-03-28 02:59:07.021017 | orchestrator | skipping: [testbed-node-5]
2026-03-28 02:59:07.021022 | orchestrator | skipping: [testbed-node-0]
2026-03-28 02:59:07.021057 | orchestrator | skipping: [testbed-node-1]
2026-03-28 02:59:07.021063 | orchestrator | skipping: [testbed-node-2]
2026-03-28 02:59:07.021068 | orchestrator |
2026-03-28 02:59:07.021073 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 02:59:07.021079 | orchestrator | Saturday 28 March 2026 02:59:05 +0000 (0:00:00.863) 0:01:10.733 ********
2026-03-28 02:59:07.021091 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851693 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851814 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851828 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851838 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851847 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851856 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851865 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851873 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 03:00:23.851883 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851892 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851900 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 03:00:23.851909 | orchestrator |
2026-03-28 03:00:23.851919 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 03:00:23.851928 | orchestrator | Saturday 28 March 2026 02:59:07 +0000 (0:00:01.435) 0:01:12.169 ********
2026-03-28 03:00:23.851937 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:00:23.851946 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:00:23.851955 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:00:23.851963 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:00:23.851972 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:00:23.851980 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:00:23.851989 | orchestrator |
2026-03-28 03:00:23.851997 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 03:00:23.852006 | orchestrator | Saturday 28 March 2026 02:59:08 +0000 (0:00:01.229) 0:01:13.398 ********
2026-03-28 03:00:23.852015 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852023 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852056 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852069 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852078 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852087 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852095 | orchestrator |
2026-03-28 03:00:23.852104 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 03:00:23.852112 | orchestrator | Saturday 28 March 2026 02:59:08 +0000 (0:00:00.749) 0:01:14.147 ********
2026-03-28 03:00:23.852121 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852129 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852138 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852146 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852155 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852163 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852172 | orchestrator |
2026-03-28 03:00:23.852181 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 03:00:23.852190 | orchestrator | Saturday 28 March 2026 02:59:09 +0000 (0:00:00.897) 0:01:15.045 ********
2026-03-28 03:00:23.852198 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852209 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852219 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852229 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852238 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852248 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852259 | orchestrator |
2026-03-28 03:00:23.852269 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 03:00:23.852279 | orchestrator | Saturday 28 March 2026 02:59:10 +0000 (0:00:00.654) 0:01:15.700 ********
2026-03-28 03:00:23.852299 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:00:23.852310 | orchestrator |
2026-03-28 03:00:23.852321 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 03:00:23.852331 | orchestrator | Saturday 28 March 2026 02:59:11 +0000 (0:00:01.380) 0:01:17.080 ********
2026-03-28 03:00:23.852340 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:23.852351 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:00:23.852362 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:23.852371 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:23.852381 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:00:23.852391 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:00:23.852401 | orchestrator |
2026-03-28 03:00:23.852411 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 03:00:23.852421 | orchestrator | Saturday 28 March 2026 03:00:13 +0000 (0:01:01.499) 0:02:18.580 ********
2026-03-28 03:00:23.852431 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852441 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852451 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852461 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852471 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852481 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852491 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852501 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852512 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852536 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852554 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852565 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852575 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852585 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852595 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852603 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852612 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852621 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852629 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852637 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852646 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 03:00:23.852654 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 03:00:23.852663 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 03:00:23.852671 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852680 | orchestrator |
2026-03-28 03:00:23.852688 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 03:00:23.852697 | orchestrator | Saturday 28 March 2026 03:00:14 +0000 (0:00:00.710) 0:02:19.290 ********
2026-03-28 03:00:23.852705 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852714 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852722 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852731 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852739 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852754 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852763 | orchestrator |
2026-03-28 03:00:23.852772 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 03:00:23.852780 | orchestrator | Saturday 28 March 2026 03:00:14 +0000 (0:00:00.872) 0:02:20.163 ********
2026-03-28 03:00:23.852836 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852846 | orchestrator |
2026-03-28 03:00:23.852866 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 03:00:23.852884 | orchestrator | Saturday 28 March 2026 03:00:15 +0000 (0:00:00.171) 0:02:20.334 ********
2026-03-28 03:00:23.852903 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852912 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852921 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.852929 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.852938 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.852946 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.852955 | orchestrator |
2026-03-28 03:00:23.852964 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 03:00:23.852972 | orchestrator | Saturday 28 March 2026 03:00:15 +0000 (0:00:00.629) 0:02:20.964 ********
2026-03-28 03:00:23.852981 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.852990 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.852998 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.853007 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.853015 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.853024 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.853059 | orchestrator |
2026-03-28 03:00:23.853069 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 03:00:23.853078 | orchestrator | Saturday 28 March 2026 03:00:16 +0000 (0:00:00.913) 0:02:21.877 ********
2026-03-28 03:00:23.853086 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.853095 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.853104 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:23.853112 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:23.853121 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:23.853130 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:23.853138 | orchestrator |
2026-03-28 03:00:23.853147 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 03:00:23.853156 | orchestrator | Saturday 28 March 2026 03:00:17 +0000 (0:00:00.686) 0:02:22.564 ********
2026-03-28 03:00:23.853165 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:23.853173 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:23.853182 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:00:23.853190 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:00:23.853199 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:23.853208 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:00:23.853216 | orchestrator |
2026-03-28 03:00:23.853225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 03:00:23.853234 | orchestrator | Saturday 28 March 2026 03:00:21 +0000 (0:00:03.802) 0:02:26.366 ********
2026-03-28 03:00:23.853242 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:23.853251 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:23.853260 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:23.853268 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:00:23.853277 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:00:23.853285 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:00:23.853294 | orchestrator |
2026-03-28 03:00:23.853302 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 03:00:23.853311 | orchestrator | Saturday 28 March 2026 03:00:21 +0000 (0:00:00.657) 0:02:27.024 ********
2026-03-28 03:00:23.853322 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:00:23.853332 | orchestrator |
2026-03-28 03:00:23.853341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 03:00:23.853357 | orchestrator | Saturday 28 March 2026 03:00:23 +0000 (0:00:01.354) 0:02:28.379 ********
2026-03-28 03:00:23.853366 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:23.853375 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:23.853391 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.013868 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014106 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014139 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014159 | orchestrator |
2026-03-28 03:00:39.014181 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 03:00:39.014202 | orchestrator | Saturday 28 March 2026 03:00:24 +0000 (0:00:00.891) 0:02:29.271 ********
2026-03-28 03:00:39.014218 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014230 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014241 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014253 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014264 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014275 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014286 | orchestrator |
2026-03-28 03:00:39.014298 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 03:00:39.014309 | orchestrator | Saturday 28 March 2026 03:00:24 +0000 (0:00:00.689) 0:02:29.960 ********
2026-03-28 03:00:39.014320 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014331 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014343 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014354 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014365 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014376 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014389 | orchestrator |
2026-03-28 03:00:39.014402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 03:00:39.014416 | orchestrator | Saturday 28 March 2026 03:00:25 +0000 (0:00:00.942) 0:02:30.903 ********
2026-03-28 03:00:39.014429 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014442 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014455 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014468 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014482 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014495 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014508 | orchestrator |
2026-03-28 03:00:39.014521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 03:00:39.014534 | orchestrator | Saturday 28 March 2026 03:00:26 +0000 (0:00:00.652) 0:02:31.555 ********
2026-03-28 03:00:39.014547 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014561 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014574 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014587 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014601 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014614 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014627 | orchestrator |
2026-03-28 03:00:39.014640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 03:00:39.014653 | orchestrator | Saturday 28 March 2026 03:00:27 +0000 (0:00:00.951) 0:02:32.507 ********
2026-03-28 03:00:39.014666 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014679 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014692 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014705 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014718 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014731 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014745 | orchestrator |
2026-03-28 03:00:39.014757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 03:00:39.014768 | orchestrator | Saturday 28 March 2026 03:00:28 +0000 (0:00:00.694) 0:02:33.201 ********
2026-03-28 03:00:39.014804 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014816 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014827 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014839 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014850 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014861 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014872 | orchestrator |
2026-03-28 03:00:39.014883 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 03:00:39.014895 | orchestrator | Saturday 28 March 2026 03:00:28 +0000 (0:00:00.908) 0:02:34.110 ********
2026-03-28 03:00:39.014906 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:39.014917 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:39.014928 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:39.014939 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:39.014950 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:39.014961 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:39.014972 | orchestrator |
2026-03-28 03:00:39.014983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 03:00:39.014994 | orchestrator | Saturday 28 March 2026 03:00:29 +0000 (0:00:00.845) 0:02:34.956 ********
2026-03-28 03:00:39.015005 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:39.015017 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:39.015028 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:39.015100 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:00:39.015114 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:00:39.015126 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:00:39.015137 | orchestrator |
2026-03-28 03:00:39.015147 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 03:00:39.015159 | orchestrator | Saturday 28 March 2026 03:00:31 +0000 (0:00:01.338) 0:02:36.294 ********
2026-03-28 03:00:39.015171 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:00:39.015184 | orchestrator |
2026-03-28 03:00:39.015195 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 03:00:39.015206 | orchestrator | Saturday 28 March 2026 03:00:32 +0000 (0:00:01.279) 0:02:37.574 ********
2026-03-28 03:00:39.015217 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 03:00:39.015229 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-28 03:00:39.015239 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-28 03:00:39.015250 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-28 03:00:39.015262 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015273 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-28 03:00:39.015303 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015322 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-28 03:00:39.015334 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015345 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015378 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015389 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-28 03:00:39.015400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015411 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015422 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015444 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015465 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015476 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-28 03:00:39.015487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015498 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015508 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015519 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015530 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015541 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-28 03:00:39.015552 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015574 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015596 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015606 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-28 03:00:39.015617 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015639 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015650 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015661 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-28 03:00:39.015683 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015726 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015737 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015748 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-28 03:00:39.015759 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015770 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015792 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015803 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015814 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-28 03:00:39.015824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015835 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015846 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015857 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015868 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:39.015879 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 03:00:39.015890 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:39.015901 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015912 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015930 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:39.015941 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:39.015952 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:39.015963 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 03:00:39.015974 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:39.015991 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:53.245721 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:53.245824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245840 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245853 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 03:00:53.245867 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:53.245881 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:53.245895 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245908 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.245923 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.245932 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 03:00:53.245939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245955 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.245964 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-28 03:00:53.245972 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 03:00:53.245980 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 03:00:53.245988 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.245996 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.246004 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-28 03:00:53.246084 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-28 03:00:53.246095 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 03:00:53.246103 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-28 03:00:53.246111 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 03:00:53.246119 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 03:00:53.246127 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-28 03:00:53.246135 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-28 03:00:53.246143 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 03:00:53.246150 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-28 03:00:53.246158 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-28 03:00:53.246166 | orchestrator |
2026-03-28 03:00:53.246175 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 03:00:53.246183 | orchestrator | Saturday 28 March 2026 03:00:38 +0000 (0:00:06.558) 0:02:44.132 ********
2026-03-28 03:00:53.246191 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246199 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246206 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246215 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:00:53.246245 | orchestrator |
2026-03-28 03:00:53.246254 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 03:00:53.246264 | orchestrator | Saturday 28 March 2026 03:00:40 +0000 (0:00:01.146) 0:02:45.279 ********
2026-03-28 03:00:53.246273 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246283 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246293 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246302 | orchestrator |
2026-03-28 03:00:53.246311 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 03:00:53.246321 | orchestrator | Saturday 28 March 2026 03:00:40 +0000 (0:00:00.721) 0:02:46.000 ********
2026-03-28 03:00:53.246330 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246340 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246349 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 03:00:53.246358 | orchestrator |
2026-03-28 03:00:53.246367 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 03:00:53.246377 | orchestrator | Saturday 28 March 2026 03:00:42 +0000 (0:00:01.172) 0:02:47.173 ********
2026-03-28 03:00:53.246386 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:53.246395 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:53.246404 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:53.246414 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246423 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246432 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246441 | orchestrator |
2026-03-28 03:00:53.246450 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 03:00:53.246479 | orchestrator | Saturday 28 March 2026 03:00:42 +0000 (0:00:00.866) 0:02:48.040 ********
2026-03-28 03:00:53.246489 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:00:53.246499 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:00:53.246508 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:00:53.246517 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246526 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246535 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246543 | orchestrator |
2026-03-28 03:00:53.246552 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 03:00:53.246561 | orchestrator | Saturday 28 March 2026 03:00:43 +0000 (0:00:00.627) 0:02:48.668 ********
2026-03-28 03:00:53.246570 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246579 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246587 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246597 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246606 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246615 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246625 | orchestrator |
2026-03-28 03:00:53.246634 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 03:00:53.246643 | orchestrator | Saturday 28 March 2026 03:00:44 +0000 (0:00:00.914) 0:02:49.582 ********
2026-03-28 03:00:53.246652 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246659 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246667 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246675 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246683 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246690 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246704 | orchestrator |
2026-03-28 03:00:53.246713 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 03:00:53.246720 | orchestrator | Saturday 28 March 2026 03:00:45 +0000 (0:00:00.614) 0:02:50.197 ********
2026-03-28 03:00:53.246728 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246736 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246744 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246751 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246759 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246767 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246775 | orchestrator |
2026-03-28 03:00:53.246782 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 03:00:53.246791 | orchestrator | Saturday 28 March 2026 03:00:46 +0000 (0:00:01.005) 0:02:51.203 ********
2026-03-28 03:00:53.246798 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246806 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246814 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246821 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246829 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246837 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246845 | orchestrator |
2026-03-28 03:00:53.246853 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 03:00:53.246860 | orchestrator | Saturday 28 March 2026 03:00:46 +0000 (0:00:00.661) 0:02:51.865 ********
2026-03-28 03:00:53.246868 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246876 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246884 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246892 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246899 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246907 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246915 | orchestrator |
2026-03-28 03:00:53.246923 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 03:00:53.246931 | orchestrator | Saturday 28 March 2026 03:00:47 +0000 (0:00:00.894) 0:02:52.759 ********
2026-03-28 03:00:53.246938 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:00:53.246946 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:00:53.246954 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:00:53.246962 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:00:53.246969 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:00:53.246977 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:00:53.246985 | orchestrator |
2026-03-28 03:00:53.246993 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 03:00:53.247001 | orchestrator | Saturday 28 March 2026 03:00:48 +0000 (0:00:00.628) 0:02:53.388 ********
2026-03-28 03:00:53.247008 | orchestrator | skipping:
[testbed-node-0] 2026-03-28 03:00:53.247016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:00:53.247024 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:00:53.247032 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:00:53.247039 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:00:53.247071 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:00:53.247079 | orchestrator | 2026-03-28 03:00:53.247087 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 03:00:53.247095 | orchestrator | Saturday 28 March 2026 03:00:51 +0000 (0:00:02.989) 0:02:56.377 ******** 2026-03-28 03:00:53.247103 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:00:53.247111 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:00:53.247118 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:00:53.247126 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:00:53.247134 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:00:53.247141 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:00:53.247149 | orchestrator | 2026-03-28 03:00:53.247157 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 03:00:53.247170 | orchestrator | Saturday 28 March 2026 03:00:51 +0000 (0:00:00.665) 0:02:57.043 ******** 2026-03-28 03:00:53.247178 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:00:53.247186 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:00:53.247193 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:00:53.247201 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:00:53.247209 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:00:53.247217 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:00:53.247224 | orchestrator | 2026-03-28 03:00:53.247232 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 03:00:53.247240 | orchestrator | Saturday 28 March 2026 03:00:52 +0000 
(0:00:00.960) 0:02:58.003 ******** 2026-03-28 03:00:53.247248 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:00:53.247255 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:00:53.247272 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.573879 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.573975 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.573988 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.573998 | orchestrator | 2026-03-28 03:01:07.574008 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 03:01:07.574090 | orchestrator | Saturday 28 March 2026 03:00:53 +0000 (0:00:00.875) 0:02:58.879 ******** 2026-03-28 03:01:07.574108 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 03:01:07.574119 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 03:01:07.574128 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 03:01:07.574136 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574145 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574153 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574161 | orchestrator | 2026-03-28 03:01:07.574169 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 03:01:07.574178 | orchestrator | Saturday 28 March 2026 03:00:54 +0000 (0:00:00.665) 0:02:59.545 ******** 2026-03-28 03:01:07.574187 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-28 03:01:07.574199 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-28 03:01:07.574209 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574217 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-28 03:01:07.574230 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-28 03:01:07.574244 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-28 03:01:07.574289 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-28 03:01:07.574305 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 03:01:07.574318 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574331 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574344 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574356 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574369 | orchestrator | 2026-03-28 03:01:07.574382 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 03:01:07.574394 | orchestrator | Saturday 28 March 2026 03:00:55 +0000 (0:00:00.966) 0:03:00.511 ******** 2026-03-28 03:01:07.574408 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574421 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:07.574434 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574448 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574462 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574476 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574489 | orchestrator | 2026-03-28 03:01:07.574502 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 03:01:07.574516 | orchestrator | Saturday 28 March 2026 03:00:56 +0000 (0:00:00.656) 0:03:01.168 ******** 2026-03-28 03:01:07.574529 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574542 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:07.574556 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574570 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574597 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574611 | orchestrator | 2026-03-28 03:01:07.574625 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 03:01:07.574656 | orchestrator | 
Saturday 28 March 2026 03:00:56 +0000 (0:00:00.862) 0:03:02.030 ******** 2026-03-28 03:01:07.574691 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574707 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:07.574721 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574748 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574761 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574775 | orchestrator | 2026-03-28 03:01:07.574789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 03:01:07.574800 | orchestrator | Saturday 28 March 2026 03:00:57 +0000 (0:00:00.685) 0:03:02.716 ******** 2026-03-28 03:01:07.574810 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574818 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:07.574826 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574842 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574850 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574858 | orchestrator | 2026-03-28 03:01:07.574866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 03:01:07.574874 | orchestrator | Saturday 28 March 2026 03:00:58 +0000 (0:00:00.908) 0:03:03.624 ******** 2026-03-28 03:01:07.574882 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.574890 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:07.574898 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:07.574906 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574914 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.574922 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.574939 | orchestrator | 2026-03-28 03:01:07.574948 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 03:01:07.574956 | orchestrator | Saturday 28 March 2026 03:00:59 +0000 (0:00:00.668) 0:03:04.293 ******** 2026-03-28 03:01:07.574964 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:07.574973 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:07.574981 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:07.574989 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.574997 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.575005 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.575013 | orchestrator | 2026-03-28 03:01:07.575021 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 03:01:07.575029 | orchestrator | Saturday 28 March 2026 03:01:00 +0000 (0:00:00.874) 0:03:05.168 ******** 2026-03-28 03:01:07.575037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:07.575066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:07.575075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:07.575083 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.575091 | orchestrator | 2026-03-28 03:01:07.575099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 03:01:07.575107 | orchestrator | Saturday 28 March 2026 03:01:00 +0000 (0:00:00.430) 0:03:05.598 ******** 2026-03-28 03:01:07.575115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:07.575123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:07.575131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:07.575139 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.575147 | orchestrator | 2026-03-28 03:01:07.575155 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 03:01:07.575163 | orchestrator | Saturday 28 March 2026 03:01:00 +0000 (0:00:00.475) 0:03:06.073 ******** 2026-03-28 03:01:07.575171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:07.575178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:07.575186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:07.575194 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:07.575202 | orchestrator | 2026-03-28 03:01:07.575210 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 03:01:07.575218 | orchestrator | Saturday 28 March 2026 03:01:01 +0000 (0:00:00.459) 0:03:06.533 ******** 2026-03-28 03:01:07.575226 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:07.575234 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:07.575242 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:07.575250 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.575258 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.575266 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.575274 | orchestrator | 2026-03-28 03:01:07.575281 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 03:01:07.575290 | orchestrator | Saturday 28 March 2026 03:01:01 +0000 (0:00:00.628) 0:03:07.161 ******** 2026-03-28 03:01:07.575297 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 03:01:07.575305 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 03:01:07.575313 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 03:01:07.575321 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-28 03:01:07.575329 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:07.575337 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-03-28 03:01:07.575345 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:07.575353 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-28 03:01:07.575361 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:07.575369 | orchestrator | 2026-03-28 03:01:07.575377 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 03:01:07.575391 | orchestrator | Saturday 28 March 2026 03:01:03 +0000 (0:00:01.825) 0:03:08.987 ******** 2026-03-28 03:01:07.575399 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:01:07.575407 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:01:07.575415 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:01:07.575423 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:01:07.575431 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:01:07.575439 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:01:07.575447 | orchestrator | 2026-03-28 03:01:07.575455 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 03:01:07.575463 | orchestrator | Saturday 28 March 2026 03:01:06 +0000 (0:00:02.688) 0:03:11.675 ******** 2026-03-28 03:01:07.575471 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:01:07.575490 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:01:25.002351 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:01:25.002465 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:01:25.002484 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:01:25.002499 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:01:25.002512 | orchestrator | 2026-03-28 03:01:25.002526 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 03:01:25.002536 | orchestrator | Saturday 28 March 2026 03:01:07 +0000 (0:00:01.049) 0:03:12.724 ******** 2026-03-28 03:01:25.002543 | orchestrator | 
skipping: [testbed-node-3] 2026-03-28 03:01:25.002551 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:25.002558 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:25.002566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:01:25.002574 | orchestrator | 2026-03-28 03:01:25.002581 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-28 03:01:25.002589 | orchestrator | Saturday 28 March 2026 03:01:08 +0000 (0:00:01.109) 0:03:13.834 ******** 2026-03-28 03:01:25.002596 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:25.002604 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:25.002612 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:25.002619 | orchestrator | 2026-03-28 03:01:25.002626 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-28 03:01:25.002633 | orchestrator | Saturday 28 March 2026 03:01:09 +0000 (0:00:00.389) 0:03:14.224 ******** 2026-03-28 03:01:25.002640 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:01:25.002648 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:01:25.002655 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:01:25.002662 | orchestrator | 2026-03-28 03:01:25.002669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-28 03:01:25.002676 | orchestrator | Saturday 28 March 2026 03:01:10 +0000 (0:00:01.528) 0:03:15.752 ******** 2026-03-28 03:01:25.002684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 03:01:25.002691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 03:01:25.002698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 03:01:25.002705 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:25.002712 | orchestrator | 
2026-03-28 03:01:25.002720 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-28 03:01:25.002727 | orchestrator | Saturday 28 March 2026 03:01:11 +0000 (0:00:00.666) 0:03:16.419 ******** 2026-03-28 03:01:25.002734 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:25.002742 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:25.002749 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:25.002756 | orchestrator | 2026-03-28 03:01:25.002764 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 03:01:25.002771 | orchestrator | Saturday 28 March 2026 03:01:11 +0000 (0:00:00.351) 0:03:16.770 ******** 2026-03-28 03:01:25.002779 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:25.002786 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:25.002793 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:25.002822 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:01:25.002830 | orchestrator | 2026-03-28 03:01:25.002855 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-28 03:01:25.002870 | orchestrator | Saturday 28 March 2026 03:01:12 +0000 (0:00:01.117) 0:03:17.888 ******** 2026-03-28 03:01:25.002878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:25.002885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:25.002892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:25.002901 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.002909 | orchestrator | 2026-03-28 03:01:25.002918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-28 03:01:25.002926 | orchestrator | Saturday 28 March 2026 03:01:13 +0000 
(0:00:00.441) 0:03:18.330 ******** 2026-03-28 03:01:25.002935 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.002943 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:25.002952 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:25.002960 | orchestrator | 2026-03-28 03:01:25.002968 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-28 03:01:25.002976 | orchestrator | Saturday 28 March 2026 03:01:13 +0000 (0:00:00.333) 0:03:18.663 ******** 2026-03-28 03:01:25.002988 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003008 | orchestrator | 2026-03-28 03:01:25.003023 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-28 03:01:25.003035 | orchestrator | Saturday 28 March 2026 03:01:13 +0000 (0:00:00.241) 0:03:18.905 ******** 2026-03-28 03:01:25.003047 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003082 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:25.003094 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:25.003107 | orchestrator | 2026-03-28 03:01:25.003119 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-28 03:01:25.003131 | orchestrator | Saturday 28 March 2026 03:01:14 +0000 (0:00:00.565) 0:03:19.470 ******** 2026-03-28 03:01:25.003142 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003149 | orchestrator | 2026-03-28 03:01:25.003156 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-28 03:01:25.003163 | orchestrator | Saturday 28 March 2026 03:01:14 +0000 (0:00:00.259) 0:03:19.730 ******** 2026-03-28 03:01:25.003170 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003178 | orchestrator | 2026-03-28 03:01:25.003185 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-28 
03:01:25.003192 | orchestrator | Saturday 28 March 2026 03:01:14 +0000 (0:00:00.266) 0:03:19.997 ******** 2026-03-28 03:01:25.003208 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003215 | orchestrator | 2026-03-28 03:01:25.003222 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-28 03:01:25.003230 | orchestrator | Saturday 28 March 2026 03:01:14 +0000 (0:00:00.153) 0:03:20.150 ******** 2026-03-28 03:01:25.003251 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003259 | orchestrator | 2026-03-28 03:01:25.003282 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-28 03:01:25.003290 | orchestrator | Saturday 28 March 2026 03:01:15 +0000 (0:00:00.265) 0:03:20.415 ******** 2026-03-28 03:01:25.003297 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003304 | orchestrator | 2026-03-28 03:01:25.003312 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-28 03:01:25.003319 | orchestrator | Saturday 28 March 2026 03:01:15 +0000 (0:00:00.276) 0:03:20.692 ******** 2026-03-28 03:01:25.003326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:25.003333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:25.003340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:25.003356 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003364 | orchestrator | 2026-03-28 03:01:25.003371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-28 03:01:25.003378 | orchestrator | Saturday 28 March 2026 03:01:15 +0000 (0:00:00.441) 0:03:21.134 ******** 2026-03-28 03:01:25.003386 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003393 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:25.003400 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:25.003407 | orchestrator | 2026-03-28 03:01:25.003414 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-28 03:01:25.003422 | orchestrator | Saturday 28 March 2026 03:01:16 +0000 (0:00:00.368) 0:03:21.502 ******** 2026-03-28 03:01:25.003429 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003436 | orchestrator | 2026-03-28 03:01:25.003443 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-28 03:01:25.003450 | orchestrator | Saturday 28 March 2026 03:01:16 +0000 (0:00:00.239) 0:03:21.742 ******** 2026-03-28 03:01:25.003457 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003465 | orchestrator | 2026-03-28 03:01:25.003472 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 03:01:25.003479 | orchestrator | Saturday 28 March 2026 03:01:17 +0000 (0:00:00.778) 0:03:22.520 ******** 2026-03-28 03:01:25.003486 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:25.003493 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:25.003500 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:25.003519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:01:25.003531 | orchestrator | 2026-03-28 03:01:25.003547 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-28 03:01:25.003563 | orchestrator | Saturday 28 March 2026 03:01:18 +0000 (0:00:00.872) 0:03:23.393 ******** 2026-03-28 03:01:25.003575 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:25.003587 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:25.003598 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:25.003609 | orchestrator | 2026-03-28 03:01:25.003620 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-03-28 03:01:25.003632 | orchestrator | Saturday 28 March 2026 03:01:18 +0000 (0:00:00.560) 0:03:23.953 ******** 2026-03-28 03:01:25.003645 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:01:25.003658 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:01:25.003669 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:01:25.003683 | orchestrator | 2026-03-28 03:01:25.003691 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-28 03:01:25.003698 | orchestrator | Saturday 28 March 2026 03:01:20 +0000 (0:00:01.232) 0:03:25.186 ******** 2026-03-28 03:01:25.003705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:25.003712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:25.003719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:25.003726 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:25.003733 | orchestrator | 2026-03-28 03:01:25.003741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-28 03:01:25.003748 | orchestrator | Saturday 28 March 2026 03:01:20 +0000 (0:00:00.675) 0:03:25.861 ******** 2026-03-28 03:01:25.003755 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:25.003762 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:25.003769 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:25.003776 | orchestrator | 2026-03-28 03:01:25.003784 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 03:01:25.003791 | orchestrator | Saturday 28 March 2026 03:01:21 +0000 (0:00:00.340) 0:03:26.202 ******** 2026-03-28 03:01:25.003798 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:25.003805 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:25.003813 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 03:01:25.003829 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:01:25.003836 | orchestrator | 2026-03-28 03:01:25.003843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-28 03:01:25.003851 | orchestrator | Saturday 28 March 2026 03:01:22 +0000 (0:00:01.188) 0:03:27.390 ******** 2026-03-28 03:01:25.003858 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:25.003865 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:25.003872 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:25.003880 | orchestrator | 2026-03-28 03:01:25.003887 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-28 03:01:25.003894 | orchestrator | Saturday 28 March 2026 03:01:22 +0000 (0:00:00.356) 0:03:27.747 ******** 2026-03-28 03:01:25.003901 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:01:25.003909 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:01:25.003916 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:01:25.003923 | orchestrator | 2026-03-28 03:01:25.003931 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-28 03:01:25.003938 | orchestrator | Saturday 28 March 2026 03:01:23 +0000 (0:00:01.255) 0:03:29.003 ******** 2026-03-28 03:01:25.003945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:01:25.003952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:01:25.003972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:01:41.415725 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:41.415913 | orchestrator | 2026-03-28 03:01:41.415947 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-28 
03:01:41.415970 | orchestrator | Saturday 28 March 2026 03:01:24 +0000 (0:00:01.147) 0:03:30.150 ******** 2026-03-28 03:01:41.415991 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:01:41.416013 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:01:41.416033 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:01:41.416088 | orchestrator | 2026-03-28 03:01:41.416109 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 03:01:41.416129 | orchestrator | Saturday 28 March 2026 03:01:25 +0000 (0:00:00.353) 0:03:30.504 ******** 2026-03-28 03:01:41.416149 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:41.416169 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:41.416190 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:41.416210 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.416230 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.416252 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.416273 | orchestrator | 2026-03-28 03:01:41.416295 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 03:01:41.416318 | orchestrator | Saturday 28 March 2026 03:01:25 +0000 (0:00:00.629) 0:03:31.133 ******** 2026-03-28 03:01:41.416340 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:01:41.416362 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:01:41.416388 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:01:41.416411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:01:41.416457 | orchestrator | 2026-03-28 03:01:41.416479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 03:01:41.416501 | orchestrator | Saturday 28 March 2026 03:01:27 +0000 (0:00:01.124) 0:03:32.258 ******** 2026-03-28 03:01:41.416522 | orchestrator | 
ok: [testbed-node-0] 2026-03-28 03:01:41.416544 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.416567 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.416588 | orchestrator | 2026-03-28 03:01:41.416610 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-28 03:01:41.416630 | orchestrator | Saturday 28 March 2026 03:01:27 +0000 (0:00:00.359) 0:03:32.618 ******** 2026-03-28 03:01:41.416651 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:01:41.416708 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:01:41.416753 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:01:41.416772 | orchestrator | 2026-03-28 03:01:41.416792 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-28 03:01:41.416813 | orchestrator | Saturday 28 March 2026 03:01:28 +0000 (0:00:01.509) 0:03:34.127 ******** 2026-03-28 03:01:41.416833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 03:01:41.416854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 03:01:41.416876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 03:01:41.416897 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.416917 | orchestrator | 2026-03-28 03:01:41.416937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-28 03:01:41.416958 | orchestrator | Saturday 28 March 2026 03:01:29 +0000 (0:00:00.718) 0:03:34.845 ******** 2026-03-28 03:01:41.416978 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.416998 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.417018 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.417037 | orchestrator | 2026-03-28 03:01:41.417083 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-28 03:01:41.417103 | orchestrator | 2026-03-28 
03:01:41.417121 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 03:01:41.417141 | orchestrator | Saturday 28 March 2026 03:01:30 +0000 (0:00:00.603) 0:03:35.449 ******** 2026-03-28 03:01:41.417161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:01:41.417182 | orchestrator | 2026-03-28 03:01:41.417201 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 03:01:41.417220 | orchestrator | Saturday 28 March 2026 03:01:31 +0000 (0:00:00.815) 0:03:36.264 ******** 2026-03-28 03:01:41.417240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:01:41.417260 | orchestrator | 2026-03-28 03:01:41.417279 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 03:01:41.417299 | orchestrator | Saturday 28 March 2026 03:01:31 +0000 (0:00:00.584) 0:03:36.848 ******** 2026-03-28 03:01:41.417320 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.417339 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.417359 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.417380 | orchestrator | 2026-03-28 03:01:41.417399 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 03:01:41.417418 | orchestrator | Saturday 28 March 2026 03:01:32 +0000 (0:00:00.731) 0:03:37.580 ******** 2026-03-28 03:01:41.417438 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.417458 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.417478 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.417497 | orchestrator | 2026-03-28 03:01:41.417517 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-28 03:01:41.417537 | orchestrator | Saturday 28 March 2026 03:01:32 +0000 (0:00:00.579) 0:03:38.160 ******** 2026-03-28 03:01:41.417557 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.417578 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.417598 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.417619 | orchestrator | 2026-03-28 03:01:41.417638 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 03:01:41.417656 | orchestrator | Saturday 28 March 2026 03:01:33 +0000 (0:00:00.322) 0:03:38.482 ******** 2026-03-28 03:01:41.417673 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.417692 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.417729 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.417751 | orchestrator | 2026-03-28 03:01:41.417803 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 03:01:41.417823 | orchestrator | Saturday 28 March 2026 03:01:33 +0000 (0:00:00.314) 0:03:38.797 ******** 2026-03-28 03:01:41.417858 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.417877 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.417894 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.417911 | orchestrator | 2026-03-28 03:01:41.417929 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 03:01:41.417947 | orchestrator | Saturday 28 March 2026 03:01:34 +0000 (0:00:00.737) 0:03:39.534 ******** 2026-03-28 03:01:41.417966 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.417984 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.418002 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.418138 | orchestrator | 2026-03-28 03:01:41.418167 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 
03:01:41.418187 | orchestrator | Saturday 28 March 2026 03:01:35 +0000 (0:00:00.653) 0:03:40.188 ******** 2026-03-28 03:01:41.418204 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.418223 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.418241 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.418260 | orchestrator | 2026-03-28 03:01:41.418279 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 03:01:41.418298 | orchestrator | Saturday 28 March 2026 03:01:35 +0000 (0:00:00.337) 0:03:40.525 ******** 2026-03-28 03:01:41.418317 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.418335 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.418354 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.418372 | orchestrator | 2026-03-28 03:01:41.418390 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 03:01:41.418407 | orchestrator | Saturday 28 March 2026 03:01:36 +0000 (0:00:00.742) 0:03:41.268 ******** 2026-03-28 03:01:41.418423 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.418440 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.418457 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.418476 | orchestrator | 2026-03-28 03:01:41.418496 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 03:01:41.418514 | orchestrator | Saturday 28 March 2026 03:01:36 +0000 (0:00:00.739) 0:03:42.008 ******** 2026-03-28 03:01:41.418532 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.418552 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.418571 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.418590 | orchestrator | 2026-03-28 03:01:41.418609 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 03:01:41.418627 | orchestrator | 
Saturday 28 March 2026 03:01:37 +0000 (0:00:00.646) 0:03:42.655 ******** 2026-03-28 03:01:41.418644 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.418663 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.418680 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.418698 | orchestrator | 2026-03-28 03:01:41.418716 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 03:01:41.418734 | orchestrator | Saturday 28 March 2026 03:01:37 +0000 (0:00:00.348) 0:03:43.004 ******** 2026-03-28 03:01:41.418751 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.418770 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.418789 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.418808 | orchestrator | 2026-03-28 03:01:41.418827 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 03:01:41.418845 | orchestrator | Saturday 28 March 2026 03:01:38 +0000 (0:00:00.354) 0:03:43.358 ******** 2026-03-28 03:01:41.418861 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.418880 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.418900 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.418919 | orchestrator | 2026-03-28 03:01:41.418938 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 03:01:41.418956 | orchestrator | Saturday 28 March 2026 03:01:38 +0000 (0:00:00.576) 0:03:43.934 ******** 2026-03-28 03:01:41.418973 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.419016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.419036 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.419088 | orchestrator | 2026-03-28 03:01:41.419108 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 03:01:41.419127 | orchestrator | Saturday 28 March 
2026 03:01:39 +0000 (0:00:00.351) 0:03:44.286 ******** 2026-03-28 03:01:41.419145 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.419164 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.419181 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.419198 | orchestrator | 2026-03-28 03:01:41.419216 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 03:01:41.419233 | orchestrator | Saturday 28 March 2026 03:01:39 +0000 (0:00:00.342) 0:03:44.628 ******** 2026-03-28 03:01:41.419251 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:01:41.419268 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:01:41.419284 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:01:41.419300 | orchestrator | 2026-03-28 03:01:41.419316 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 03:01:41.419332 | orchestrator | Saturday 28 March 2026 03:01:39 +0000 (0:00:00.324) 0:03:44.953 ******** 2026-03-28 03:01:41.419348 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.419365 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.419382 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.419399 | orchestrator | 2026-03-28 03:01:41.419416 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 03:01:41.419434 | orchestrator | Saturday 28 March 2026 03:01:40 +0000 (0:00:00.637) 0:03:45.591 ******** 2026-03-28 03:01:41.419452 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.419470 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.419487 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.419504 | orchestrator | 2026-03-28 03:01:41.419520 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 03:01:41.419537 | orchestrator | Saturday 28 March 2026 03:01:40 +0000 (0:00:00.395) 
0:03:45.986 ******** 2026-03-28 03:01:41.419554 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:01:41.419571 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:01:41.419589 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:01:41.419606 | orchestrator | 2026-03-28 03:01:41.419639 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-28 03:01:41.419681 | orchestrator | Saturday 28 March 2026 03:01:41 +0000 (0:00:00.576) 0:03:46.562 ******** 2026-03-28 03:02:11.020112 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020196 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020202 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:11.020207 | orchestrator | 2026-03-28 03:02:11.020213 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-28 03:02:11.020218 | orchestrator | Saturday 28 March 2026 03:01:41 +0000 (0:00:00.590) 0:03:47.152 ******** 2026-03-28 03:02:11.020223 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:11.020228 | orchestrator | 2026-03-28 03:02:11.020232 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-28 03:02:11.020236 | orchestrator | Saturday 28 March 2026 03:01:42 +0000 (0:00:00.618) 0:03:47.771 ******** 2026-03-28 03:02:11.020240 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:11.020245 | orchestrator | 2026-03-28 03:02:11.020249 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-28 03:02:11.020253 | orchestrator | Saturday 28 March 2026 03:01:42 +0000 (0:00:00.191) 0:03:47.962 ******** 2026-03-28 03:02:11.020257 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-28 03:02:11.020261 | orchestrator | 2026-03-28 03:02:11.020264 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-28 03:02:11.020268 | orchestrator | Saturday 28 March 2026 03:01:43 +0000 (0:00:01.166) 0:03:49.129 ******** 2026-03-28 03:02:11.020289 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020293 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020297 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:11.020300 | orchestrator | 2026-03-28 03:02:11.020304 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-28 03:02:11.020308 | orchestrator | Saturday 28 March 2026 03:01:44 +0000 (0:00:00.639) 0:03:49.768 ******** 2026-03-28 03:02:11.020312 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020316 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020320 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:11.020324 | orchestrator | 2026-03-28 03:02:11.020328 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-28 03:02:11.020331 | orchestrator | Saturday 28 March 2026 03:01:44 +0000 (0:00:00.384) 0:03:50.153 ******** 2026-03-28 03:02:11.020336 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020340 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020344 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020348 | orchestrator | 2026-03-28 03:02:11.020352 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-28 03:02:11.020356 | orchestrator | Saturday 28 March 2026 03:01:46 +0000 (0:00:01.297) 0:03:51.451 ******** 2026-03-28 03:02:11.020360 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020364 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020368 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020372 | orchestrator | 2026-03-28 03:02:11.020375 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-03-28 03:02:11.020379 | orchestrator | Saturday 28 March 2026 03:01:47 +0000 (0:00:00.811) 0:03:52.262 ******** 2026-03-28 03:02:11.020383 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020387 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020391 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020394 | orchestrator | 2026-03-28 03:02:11.020398 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-28 03:02:11.020402 | orchestrator | Saturday 28 March 2026 03:01:48 +0000 (0:00:01.004) 0:03:53.267 ******** 2026-03-28 03:02:11.020406 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020410 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020413 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:11.020417 | orchestrator | 2026-03-28 03:02:11.020421 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-28 03:02:11.020425 | orchestrator | Saturday 28 March 2026 03:01:48 +0000 (0:00:00.682) 0:03:53.949 ******** 2026-03-28 03:02:11.020429 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020432 | orchestrator | 2026-03-28 03:02:11.020436 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-28 03:02:11.020440 | orchestrator | Saturday 28 March 2026 03:01:50 +0000 (0:00:01.400) 0:03:55.350 ******** 2026-03-28 03:02:11.020444 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020448 | orchestrator | 2026-03-28 03:02:11.020451 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-28 03:02:11.020455 | orchestrator | Saturday 28 March 2026 03:01:50 +0000 (0:00:00.669) 0:03:56.019 ******** 2026-03-28 03:02:11.020459 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:02:11.020463 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:02:11.020467 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:02:11.020471 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:02:11.020474 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-28 03:02:11.020478 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:02:11.020482 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:02:11.020486 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-28 03:02:11.020490 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:02:11.020497 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-28 03:02:11.020501 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-28 03:02:11.020505 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-28 03:02:11.020509 | orchestrator | 2026-03-28 03:02:11.020512 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-28 03:02:11.020516 | orchestrator | Saturday 28 March 2026 03:01:53 +0000 (0:00:03.083) 0:03:59.102 ******** 2026-03-28 03:02:11.020520 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020524 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020539 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020543 | orchestrator | 2026-03-28 03:02:11.020546 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-28 03:02:11.020561 | orchestrator | Saturday 28 March 2026 03:01:55 +0000 (0:00:01.244) 0:04:00.347 ******** 2026-03-28 03:02:11.020565 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020569 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020573 | orchestrator | ok: [testbed-node-2] 
2026-03-28 03:02:11.020576 | orchestrator | 2026-03-28 03:02:11.020580 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-28 03:02:11.020584 | orchestrator | Saturday 28 March 2026 03:01:55 +0000 (0:00:00.590) 0:04:00.937 ******** 2026-03-28 03:02:11.020588 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020592 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:11.020595 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:11.020599 | orchestrator | 2026-03-28 03:02:11.020603 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-28 03:02:11.020607 | orchestrator | Saturday 28 March 2026 03:01:56 +0000 (0:00:00.356) 0:04:01.294 ******** 2026-03-28 03:02:11.020610 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020614 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020618 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020622 | orchestrator | 2026-03-28 03:02:11.020626 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-28 03:02:11.020630 | orchestrator | Saturday 28 March 2026 03:01:57 +0000 (0:00:01.543) 0:04:02.837 ******** 2026-03-28 03:02:11.020633 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020637 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020641 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020645 | orchestrator | 2026-03-28 03:02:11.020648 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-28 03:02:11.020652 | orchestrator | Saturday 28 March 2026 03:01:59 +0000 (0:00:01.670) 0:04:04.507 ******** 2026-03-28 03:02:11.020656 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:11.020660 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:11.020663 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:11.020667 
| orchestrator | 2026-03-28 03:02:11.020671 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-28 03:02:11.020675 | orchestrator | Saturday 28 March 2026 03:01:59 +0000 (0:00:00.318) 0:04:04.826 ******** 2026-03-28 03:02:11.020679 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:11.020682 | orchestrator | 2026-03-28 03:02:11.020688 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-28 03:02:11.020692 | orchestrator | Saturday 28 March 2026 03:02:00 +0000 (0:00:00.535) 0:04:05.362 ******** 2026-03-28 03:02:11.020697 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:11.020701 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:11.020705 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:11.020710 | orchestrator | 2026-03-28 03:02:11.020714 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-28 03:02:11.020718 | orchestrator | Saturday 28 March 2026 03:02:00 +0000 (0:00:00.550) 0:04:05.913 ******** 2026-03-28 03:02:11.020723 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:11.020735 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:11.020739 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:11.020744 | orchestrator | 2026-03-28 03:02:11.020748 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-28 03:02:11.020753 | orchestrator | Saturday 28 March 2026 03:02:01 +0000 (0:00:00.344) 0:04:06.257 ******** 2026-03-28 03:02:11.020758 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:11.020763 | orchestrator | 2026-03-28 03:02:11.020768 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-03-28 03:02:11.020772 | orchestrator | Saturday 28 March 2026 03:02:01 +0000 (0:00:00.526) 0:04:06.783 ******** 2026-03-28 03:02:11.020776 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020781 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020785 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020790 | orchestrator | 2026-03-28 03:02:11.020794 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-28 03:02:11.020799 | orchestrator | Saturday 28 March 2026 03:02:03 +0000 (0:00:02.147) 0:04:08.931 ******** 2026-03-28 03:02:11.020803 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020808 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020812 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020817 | orchestrator | 2026-03-28 03:02:11.020821 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-28 03:02:11.020825 | orchestrator | Saturday 28 March 2026 03:02:05 +0000 (0:00:01.278) 0:04:10.209 ******** 2026-03-28 03:02:11.020830 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020834 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020839 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020844 | orchestrator | 2026-03-28 03:02:11.020848 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-28 03:02:11.020853 | orchestrator | Saturday 28 March 2026 03:02:06 +0000 (0:00:01.888) 0:04:12.098 ******** 2026-03-28 03:02:11.020857 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:02:11.020862 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:02:11.020866 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:02:11.020870 | orchestrator | 2026-03-28 03:02:11.020875 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-03-28 03:02:11.020880 | orchestrator | Saturday 28 March 2026 03:02:08 +0000 (0:00:01.979) 0:04:14.077 ******** 2026-03-28 03:02:11.020884 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:11.020889 | orchestrator | 2026-03-28 03:02:11.020893 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-28 03:02:11.020898 | orchestrator | Saturday 28 March 2026 03:02:09 +0000 (0:00:00.884) 0:04:14.962 ******** 2026-03-28 03:02:11.020902 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:11.020907 | orchestrator | 2026-03-28 03:02:11.020915 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-28 03:02:11.020922 | orchestrator | Saturday 28 March 2026 03:02:10 +0000 (0:00:01.200) 0:04:16.163 ******** 2026-03-28 03:02:48.251124 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.251260 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.251274 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.251284 | orchestrator | 2026-03-28 03:02:48.251295 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-28 03:02:48.251306 | orchestrator | Saturday 28 March 2026 03:02:20 +0000 (0:00:09.638) 0:04:25.801 ******** 2026-03-28 03:02:48.251316 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.251326 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.251335 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.251344 | orchestrator | 2026-03-28 03:02:48.251353 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-28 03:02:48.251362 | orchestrator | Saturday 28 March 2026 03:02:20 +0000 (0:00:00.315) 0:04:26.117 ******** 2026-03-28 03:02:48.251399 | orchestrator | changed: [testbed-node-0] => 
(item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 03:02:48.251412 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-28 03:02:48.251423 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 03:02:48.251433 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 03:02:48.251444 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 
2026-03-28 03:02:48.251455 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__91ee27a7d9951228a794aaa8212ad6e868ace516'}])  2026-03-28 03:02:48.251466 | orchestrator | 2026-03-28 03:02:48.251475 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 03:02:48.251484 | orchestrator | Saturday 28 March 2026 03:02:36 +0000 (0:00:15.957) 0:04:42.075 ******** 2026-03-28 03:02:48.251493 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.251501 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.251510 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.251519 | orchestrator | 2026-03-28 03:02:48.251528 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 03:02:48.251537 | orchestrator | Saturday 28 March 2026 03:02:37 +0000 (0:00:00.353) 0:04:42.429 ******** 2026-03-28 03:02:48.251546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:48.251555 | orchestrator | 2026-03-28 03:02:48.251564 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-28 03:02:48.251573 | orchestrator | Saturday 28 March 2026 03:02:38 +0000 (0:00:00.820) 0:04:43.249 ******** 2026-03-28 03:02:48.251582 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.251590 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.251599 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.251609 | orchestrator | 2026-03-28 03:02:48.251617 | orchestrator | RUNNING 
HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-28 03:02:48.251626 | orchestrator | Saturday 28 March 2026 03:02:38 +0000 (0:00:00.348) 0:04:43.598 ******** 2026-03-28 03:02:48.251643 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.251654 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.251665 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.251675 | orchestrator | 2026-03-28 03:02:48.251703 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-28 03:02:48.251731 | orchestrator | Saturday 28 March 2026 03:02:38 +0000 (0:00:00.349) 0:04:43.947 ******** 2026-03-28 03:02:48.251743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 03:02:48.251753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 03:02:48.251763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 03:02:48.251773 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.251783 | orchestrator | 2026-03-28 03:02:48.251794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-28 03:02:48.251804 | orchestrator | Saturday 28 March 2026 03:02:39 +0000 (0:00:00.935) 0:04:44.883 ******** 2026-03-28 03:02:48.251813 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.251824 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.251834 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.251845 | orchestrator | 2026-03-28 03:02:48.251855 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-28 03:02:48.251865 | orchestrator | 2026-03-28 03:02:48.251889 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 03:02:48.251912 | orchestrator | Saturday 28 March 2026 03:02:40 +0000 (0:00:00.888) 0:04:45.772 ******** 2026-03-28 
03:02:48.251924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:48.251944 | orchestrator | 2026-03-28 03:02:48.251954 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 03:02:48.251963 | orchestrator | Saturday 28 March 2026 03:02:41 +0000 (0:00:00.575) 0:04:46.348 ******** 2026-03-28 03:02:48.251971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:02:48.251980 | orchestrator | 2026-03-28 03:02:48.251989 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 03:02:48.251998 | orchestrator | Saturday 28 March 2026 03:02:41 +0000 (0:00:00.805) 0:04:47.153 ******** 2026-03-28 03:02:48.252007 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.252016 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.252025 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.252033 | orchestrator | 2026-03-28 03:02:48.252042 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 03:02:48.252051 | orchestrator | Saturday 28 March 2026 03:02:42 +0000 (0:00:00.759) 0:04:47.913 ******** 2026-03-28 03:02:48.252060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252069 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.252078 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252087 | orchestrator | 2026-03-28 03:02:48.252096 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 03:02:48.252104 | orchestrator | Saturday 28 March 2026 03:02:43 +0000 (0:00:00.339) 0:04:48.252 ******** 2026-03-28 03:02:48.252114 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252122 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 03:02:48.252131 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252140 | orchestrator | 2026-03-28 03:02:48.252149 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 03:02:48.252158 | orchestrator | Saturday 28 March 2026 03:02:43 +0000 (0:00:00.609) 0:04:48.861 ******** 2026-03-28 03:02:48.252167 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252175 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.252184 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252193 | orchestrator | 2026-03-28 03:02:48.252202 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 03:02:48.252217 | orchestrator | Saturday 28 March 2026 03:02:44 +0000 (0:00:00.379) 0:04:49.241 ******** 2026-03-28 03:02:48.252227 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.252235 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.252244 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.252253 | orchestrator | 2026-03-28 03:02:48.252262 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 03:02:48.252271 | orchestrator | Saturday 28 March 2026 03:02:44 +0000 (0:00:00.820) 0:04:50.061 ******** 2026-03-28 03:02:48.252280 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252289 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.252298 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252306 | orchestrator | 2026-03-28 03:02:48.252315 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 03:02:48.252324 | orchestrator | Saturday 28 March 2026 03:02:45 +0000 (0:00:00.365) 0:04:50.427 ******** 2026-03-28 03:02:48.252333 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252342 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 03:02:48.252351 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252359 | orchestrator | 2026-03-28 03:02:48.252368 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 03:02:48.252377 | orchestrator | Saturday 28 March 2026 03:02:45 +0000 (0:00:00.638) 0:04:51.065 ******** 2026-03-28 03:02:48.252386 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.252395 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.252404 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.252412 | orchestrator | 2026-03-28 03:02:48.252421 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 03:02:48.252430 | orchestrator | Saturday 28 March 2026 03:02:46 +0000 (0:00:00.789) 0:04:51.855 ******** 2026-03-28 03:02:48.252439 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:02:48.252448 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:02:48.252467 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:02:48.252476 | orchestrator | 2026-03-28 03:02:48.252494 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 03:02:48.252503 | orchestrator | Saturday 28 March 2026 03:02:47 +0000 (0:00:00.745) 0:04:52.600 ******** 2026-03-28 03:02:48.252512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:02:48.252521 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:02:48.252530 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:02:48.252539 | orchestrator | 2026-03-28 03:02:48.252553 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 03:02:48.252562 | orchestrator | Saturday 28 March 2026 03:02:47 +0000 (0:00:00.305) 0:04:52.906 ******** 2026-03-28 03:02:48.252576 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.490608 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.490703 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 03:03:21.490712 | orchestrator | 2026-03-28 03:03:21.490721 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 03:03:21.490729 | orchestrator | Saturday 28 March 2026 03:02:48 +0000 (0:00:00.650) 0:04:53.556 ******** 2026-03-28 03:03:21.490735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.490743 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.490750 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.490755 | orchestrator | 2026-03-28 03:03:21.490762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 03:03:21.490768 | orchestrator | Saturday 28 March 2026 03:02:48 +0000 (0:00:00.331) 0:04:53.888 ******** 2026-03-28 03:03:21.490774 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.490779 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.490786 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.490791 | orchestrator | 2026-03-28 03:03:21.490797 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 03:03:21.490803 | orchestrator | Saturday 28 March 2026 03:02:49 +0000 (0:00:00.308) 0:04:54.196 ******** 2026-03-28 03:03:21.490853 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.490860 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.490866 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.490872 | orchestrator | 2026-03-28 03:03:21.490878 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 03:03:21.490883 | orchestrator | Saturday 28 March 2026 03:02:49 +0000 (0:00:00.623) 0:04:54.819 ******** 2026-03-28 03:03:21.490889 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.490895 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.490901 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 03:03:21.490907 | orchestrator | 2026-03-28 03:03:21.490913 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 03:03:21.490919 | orchestrator | Saturday 28 March 2026 03:02:49 +0000 (0:00:00.322) 0:04:55.142 ******** 2026-03-28 03:03:21.490925 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.490931 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.490938 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.490942 | orchestrator | 2026-03-28 03:03:21.490946 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 03:03:21.490950 | orchestrator | Saturday 28 March 2026 03:02:50 +0000 (0:00:00.317) 0:04:55.460 ******** 2026-03-28 03:03:21.490954 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.490958 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.490961 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:03:21.490965 | orchestrator | 2026-03-28 03:03:21.490969 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 03:03:21.490973 | orchestrator | Saturday 28 March 2026 03:02:50 +0000 (0:00:00.350) 0:04:55.811 ******** 2026-03-28 03:03:21.490977 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.490981 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.490985 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:03:21.490988 | orchestrator | 2026-03-28 03:03:21.490992 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 03:03:21.490996 | orchestrator | Saturday 28 March 2026 03:02:51 +0000 (0:00:00.637) 0:04:56.448 ******** 2026-03-28 03:03:21.491000 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.491004 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.491007 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:03:21.491011 | 
orchestrator | 2026-03-28 03:03:21.491015 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 03:03:21.491019 | orchestrator | Saturday 28 March 2026 03:02:51 +0000 (0:00:00.592) 0:04:57.041 ******** 2026-03-28 03:03:21.491023 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 03:03:21.491027 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 03:03:21.491031 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 03:03:21.491035 | orchestrator | 2026-03-28 03:03:21.491039 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 03:03:21.491043 | orchestrator | Saturday 28 March 2026 03:02:52 +0000 (0:00:00.992) 0:04:58.033 ******** 2026-03-28 03:03:21.491047 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:03:21.491052 | orchestrator | 2026-03-28 03:03:21.491055 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 03:03:21.491059 | orchestrator | Saturday 28 March 2026 03:02:53 +0000 (0:00:00.796) 0:04:58.829 ******** 2026-03-28 03:03:21.491063 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:03:21.491067 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:03:21.491071 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:03:21.491074 | orchestrator | 2026-03-28 03:03:21.491078 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 03:03:21.491082 | orchestrator | Saturday 28 March 2026 03:02:54 +0000 (0:00:00.709) 0:04:59.539 ******** 2026-03-28 03:03:21.491092 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.491096 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.491100 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 03:03:21.491103 | orchestrator | 2026-03-28 03:03:21.491107 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-28 03:03:21.491111 | orchestrator | Saturday 28 March 2026 03:02:54 +0000 (0:00:00.350) 0:04:59.890 ******** 2026-03-28 03:03:21.491115 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:03:21.491120 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:03:21.491123 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:03:21.491127 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-28 03:03:21.491131 | orchestrator | 2026-03-28 03:03:21.491135 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-28 03:03:21.491149 | orchestrator | Saturday 28 March 2026 03:03:06 +0000 (0:00:11.772) 0:05:11.663 ******** 2026-03-28 03:03:21.491154 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.491158 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.491173 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:03:21.491178 | orchestrator | 2026-03-28 03:03:21.491183 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-28 03:03:21.491188 | orchestrator | Saturday 28 March 2026 03:03:07 +0000 (0:00:00.687) 0:05:12.350 ******** 2026-03-28 03:03:21.491192 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 03:03:21.491197 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 03:03:21.491201 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 03:03:21.491206 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 03:03:21.491210 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:03:21.491215 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-03-28 03:03:21.491219 | orchestrator | 2026-03-28 03:03:21.491224 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-28 03:03:21.491228 | orchestrator | Saturday 28 March 2026 03:03:09 +0000 (0:00:02.333) 0:05:14.683 ******** 2026-03-28 03:03:21.491233 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 03:03:21.491238 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 03:03:21.491242 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 03:03:21.491246 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:03:21.491251 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-28 03:03:21.491255 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-28 03:03:21.491260 | orchestrator | 2026-03-28 03:03:21.491264 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-28 03:03:21.491268 | orchestrator | Saturday 28 March 2026 03:03:10 +0000 (0:00:01.334) 0:05:16.018 ******** 2026-03-28 03:03:21.491273 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:03:21.491277 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:03:21.491282 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:03:21.491286 | orchestrator | 2026-03-28 03:03:21.491290 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-28 03:03:21.491295 | orchestrator | Saturday 28 March 2026 03:03:11 +0000 (0:00:00.696) 0:05:16.715 ******** 2026-03-28 03:03:21.491299 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.491304 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.491308 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.491312 | orchestrator | 2026-03-28 03:03:21.491317 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-28 03:03:21.491321 | 
orchestrator | Saturday 28 March 2026 03:03:12 +0000 (0:00:00.708) 0:05:17.423 ******** 2026-03-28 03:03:21.491325 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.491330 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.491338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.491343 | orchestrator | 2026-03-28 03:03:21.491348 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 03:03:21.491352 | orchestrator | Saturday 28 March 2026 03:03:12 +0000 (0:00:00.328) 0:05:17.752 ******** 2026-03-28 03:03:21.491357 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:03:21.491361 | orchestrator | 2026-03-28 03:03:21.491365 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-28 03:03:21.491370 | orchestrator | Saturday 28 March 2026 03:03:13 +0000 (0:00:00.532) 0:05:18.284 ******** 2026-03-28 03:03:21.491374 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.491379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.491383 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.491388 | orchestrator | 2026-03-28 03:03:21.491392 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-28 03:03:21.491397 | orchestrator | Saturday 28 March 2026 03:03:13 +0000 (0:00:00.586) 0:05:18.871 ******** 2026-03-28 03:03:21.491401 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:03:21.491406 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:03:21.491410 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:03:21.491414 | orchestrator | 2026-03-28 03:03:21.491419 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-28 03:03:21.491423 | orchestrator | Saturday 28 March 2026 03:03:14 +0000 (0:00:00.373) 
0:05:19.244 ******** 2026-03-28 03:03:21.491428 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:03:21.491432 | orchestrator | 2026-03-28 03:03:21.491437 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 03:03:21.491441 | orchestrator | Saturday 28 March 2026 03:03:14 +0000 (0:00:00.589) 0:05:19.834 ******** 2026-03-28 03:03:21.491445 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:03:21.491450 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:03:21.491454 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:03:21.491459 | orchestrator | 2026-03-28 03:03:21.491463 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 03:03:21.491467 | orchestrator | Saturday 28 March 2026 03:03:16 +0000 (0:00:01.800) 0:05:21.635 ******** 2026-03-28 03:03:21.491472 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:03:21.491476 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:03:21.491480 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:03:21.491485 | orchestrator | 2026-03-28 03:03:21.491489 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 03:03:21.491494 | orchestrator | Saturday 28 March 2026 03:03:17 +0000 (0:00:01.191) 0:05:22.826 ******** 2026-03-28 03:03:21.491499 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:03:21.491503 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:03:21.491507 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:03:21.491512 | orchestrator | 2026-03-28 03:03:21.491516 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-28 03:03:21.491521 | orchestrator | Saturday 28 March 2026 03:03:19 +0000 (0:00:01.774) 0:05:24.601 ******** 2026-03-28 03:03:21.491525 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 03:03:21.491532 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:03:21.491536 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:03:21.491539 | orchestrator | 2026-03-28 03:03:21.491546 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 03:04:20.084162 | orchestrator | Saturday 28 March 2026 03:03:21 +0000 (0:00:02.038) 0:05:26.639 ******** 2026-03-28 03:04:20.084277 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:04:20.084294 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:04:20.084306 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-28 03:04:20.084318 | orchestrator | 2026-03-28 03:04:20.084331 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-28 03:04:20.084366 | orchestrator | Saturday 28 March 2026 03:03:22 +0000 (0:00:00.757) 0:05:27.397 ******** 2026-03-28 03:04:20.084378 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-28 03:04:20.084391 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-28 03:04:20.084402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-28 03:04:20.084413 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-28 03:04:20.084424 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-28 03:04:20.084435 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:04:20.084446 | orchestrator | 2026-03-28 03:04:20.084458 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-28 03:04:20.084469 | orchestrator | Saturday 28 March 2026 03:03:52 +0000 (0:00:30.418) 0:05:57.815 ******** 2026-03-28 03:04:20.084480 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:04:20.084491 | orchestrator | 2026-03-28 03:04:20.084502 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-28 03:04:20.084513 | orchestrator | Saturday 28 March 2026 03:03:54 +0000 (0:00:01.380) 0:05:59.196 ******** 2026-03-28 03:04:20.084524 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:04:20.084536 | orchestrator | 2026-03-28 03:04:20.084547 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-28 03:04:20.084558 | orchestrator | Saturday 28 March 2026 03:03:54 +0000 (0:00:00.349) 0:05:59.546 ******** 2026-03-28 03:04:20.084569 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:04:20.084580 | orchestrator | 2026-03-28 03:04:20.084591 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-28 03:04:20.084602 | orchestrator | Saturday 28 March 2026 03:03:54 +0000 (0:00:00.169) 0:05:59.716 ******** 2026-03-28 03:04:20.084613 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-28 03:04:20.084624 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-28 03:04:20.084635 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-28 03:04:20.084646 | orchestrator | 2026-03-28 03:04:20.084657 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-28 03:04:20.084668 | orchestrator | Saturday 28 March 2026 03:04:01 +0000 (0:00:06.544) 0:06:06.261 ******** 2026-03-28 03:04:20.084679 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-28 03:04:20.084692 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-28 03:04:20.084706 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-28 03:04:20.084748 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-28 03:04:20.084762 | orchestrator | 2026-03-28 03:04:20.084776 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 03:04:20.084789 | orchestrator | Saturday 28 March 2026 03:04:06 +0000 (0:00:05.485) 0:06:11.746 ******** 2026-03-28 03:04:20.084802 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:04:20.084815 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:04:20.084828 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:04:20.084841 | orchestrator | 2026-03-28 03:04:20.084855 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 03:04:20.084868 | orchestrator | Saturday 28 March 2026 03:04:07 +0000 (0:00:00.707) 0:06:12.454 ******** 2026-03-28 03:04:20.084882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:04:20.084895 | orchestrator | 2026-03-28 03:04:20.084917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 03:04:20.084930 | orchestrator | Saturday 28 March 2026 03:04:08 +0000 (0:00:00.817) 0:06:13.271 ******** 2026-03-28 03:04:20.084943 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:04:20.084955 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:04:20.084968 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 03:04:20.084980 | orchestrator | 2026-03-28 03:04:20.084993 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-28 03:04:20.085006 | orchestrator | Saturday 28 March 2026 03:04:08 +0000 (0:00:00.391) 0:06:13.663 ******** 2026-03-28 03:04:20.085019 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:04:20.085033 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:04:20.085044 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:04:20.085054 | orchestrator | 2026-03-28 03:04:20.085065 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-28 03:04:20.085076 | orchestrator | Saturday 28 March 2026 03:04:09 +0000 (0:00:01.271) 0:06:14.935 ******** 2026-03-28 03:04:20.085086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 03:04:20.085097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 03:04:20.085124 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 03:04:20.085135 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:04:20.085146 | orchestrator | 2026-03-28 03:04:20.085174 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-28 03:04:20.085186 | orchestrator | Saturday 28 March 2026 03:04:10 +0000 (0:00:00.975) 0:06:15.910 ******** 2026-03-28 03:04:20.085197 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:04:20.085208 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:04:20.085219 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:04:20.085229 | orchestrator | 2026-03-28 03:04:20.085241 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-28 03:04:20.085251 | orchestrator | 2026-03-28 03:04:20.085263 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 
03:04:20.085274 | orchestrator | Saturday 28 March 2026 03:04:11 +0000 (0:00:01.040) 0:06:16.951 ********
2026-03-28 03:04:20.085285 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:04:20.085297 | orchestrator |
2026-03-28 03:04:20.085308 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 03:04:20.085319 | orchestrator | Saturday 28 March 2026 03:04:12 +0000 (0:00:00.531) 0:06:17.482 ********
2026-03-28 03:04:20.085330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:04:20.085342 | orchestrator |
2026-03-28 03:04:20.085353 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 03:04:20.085365 | orchestrator | Saturday 28 March 2026 03:04:13 +0000 (0:00:00.792) 0:06:18.275 ********
2026-03-28 03:04:20.085375 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.085386 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.085397 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.085408 | orchestrator |
2026-03-28 03:04:20.085419 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 03:04:20.085430 | orchestrator | Saturday 28 March 2026 03:04:13 +0000 (0:00:00.335) 0:06:18.611 ********
2026-03-28 03:04:20.085440 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.085451 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.085462 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.085474 | orchestrator |
2026-03-28 03:04:20.085485 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 03:04:20.085496 | orchestrator | Saturday 28 March 2026 03:04:14 +0000 (0:00:00.701) 0:06:19.312 ********
2026-03-28 03:04:20.085507 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.085518 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.085537 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.085548 | orchestrator |
2026-03-28 03:04:20.085560 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 03:04:20.085571 | orchestrator | Saturday 28 March 2026 03:04:14 +0000 (0:00:00.704) 0:06:20.016 ********
2026-03-28 03:04:20.085582 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.085592 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.085603 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.085615 | orchestrator |
2026-03-28 03:04:20.085626 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 03:04:20.085638 | orchestrator | Saturday 28 March 2026 03:04:15 +0000 (0:00:01.026) 0:06:21.043 ********
2026-03-28 03:04:20.085649 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.085660 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.085671 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.085682 | orchestrator |
2026-03-28 03:04:20.085694 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 03:04:20.085705 | orchestrator | Saturday 28 March 2026 03:04:16 +0000 (0:00:00.355) 0:06:21.399 ********
2026-03-28 03:04:20.085742 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.085759 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.085771 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.085782 | orchestrator |
2026-03-28 03:04:20.085793 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 03:04:20.085804 | orchestrator | Saturday 28 March 2026 03:04:16 +0000 (0:00:00.325) 0:06:21.724 ********
2026-03-28 03:04:20.085815 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.085827 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.085839 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.085850 | orchestrator |
2026-03-28 03:04:20.085861 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 03:04:20.085873 | orchestrator | Saturday 28 March 2026 03:04:16 +0000 (0:00:00.330) 0:06:22.054 ********
2026-03-28 03:04:20.085884 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.085896 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.085907 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.085918 | orchestrator |
2026-03-28 03:04:20.085930 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 03:04:20.085942 | orchestrator | Saturday 28 March 2026 03:04:17 +0000 (0:00:01.078) 0:06:23.133 ********
2026-03-28 03:04:20.085953 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.085964 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.085976 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.085987 | orchestrator |
2026-03-28 03:04:20.085999 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 03:04:20.086010 | orchestrator | Saturday 28 March 2026 03:04:18 +0000 (0:00:00.731) 0:06:23.864 ********
2026-03-28 03:04:20.086107 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.086120 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.086131 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.086142 | orchestrator |
2026-03-28 03:04:20.086154 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 03:04:20.086165 | orchestrator | Saturday 28 March 2026 03:04:19 +0000 (0:00:00.339) 0:06:24.204 ********
2026-03-28 03:04:20.086176 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:04:20.086187 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:04:20.086197 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:04:20.086208 | orchestrator |
2026-03-28 03:04:20.086219 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 03:04:20.086256 | orchestrator | Saturday 28 March 2026 03:04:19 +0000 (0:00:00.360) 0:06:24.564 ********
2026-03-28 03:04:20.086276 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:04:20.086288 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:04:20.086299 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:04:20.086310 | orchestrator |
2026-03-28 03:04:20.086340 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 03:05:23.372722 | orchestrator | Saturday 28 March 2026 03:04:20 +0000 (0:00:00.664) 0:06:25.229 ********
2026-03-28 03:05:23.372865 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.372883 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.372897 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.372909 | orchestrator |
2026-03-28 03:05:23.372921 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 03:05:23.372981 | orchestrator | Saturday 28 March 2026 03:04:20 +0000 (0:00:00.420) 0:06:25.649 ********
2026-03-28 03:05:23.372994 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373007 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373018 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373030 | orchestrator |
2026-03-28 03:05:23.373042 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 03:05:23.373053 | orchestrator | Saturday 28 March 2026 03:04:20 +0000 (0:00:00.441) 0:06:26.091 ********
2026-03-28 03:05:23.373064 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.373077 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.373088 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.373099 | orchestrator |
2026-03-28 03:05:23.373110 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 03:05:23.373121 | orchestrator | Saturday 28 March 2026 03:04:21 +0000 (0:00:00.312) 0:06:26.403 ********
2026-03-28 03:05:23.373135 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.373148 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.373161 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.373173 | orchestrator |
2026-03-28 03:05:23.373186 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 03:05:23.373199 | orchestrator | Saturday 28 March 2026 03:04:21 +0000 (0:00:00.636) 0:06:27.040 ********
2026-03-28 03:05:23.373212 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.373225 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.373238 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.373249 | orchestrator |
2026-03-28 03:05:23.373260 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 03:05:23.373271 | orchestrator | Saturday 28 March 2026 03:04:22 +0000 (0:00:00.353) 0:06:27.394 ********
2026-03-28 03:05:23.373282 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373294 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373305 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373316 | orchestrator |
2026-03-28 03:05:23.373327 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 03:05:23.373338 | orchestrator | Saturday 28 March 2026 03:04:22 +0000 (0:00:00.406) 0:06:27.800 ********
2026-03-28 03:05:23.373349 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373361 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373372 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373383 | orchestrator |
2026-03-28 03:05:23.373393 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-28 03:05:23.373405 | orchestrator | Saturday 28 March 2026 03:04:23 +0000 (0:00:00.826) 0:06:28.627 ********
2026-03-28 03:05:23.373416 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373427 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373438 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373449 | orchestrator |
2026-03-28 03:05:23.373459 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-28 03:05:23.373471 | orchestrator | Saturday 28 March 2026 03:04:23 +0000 (0:00:00.362) 0:06:28.989 ********
2026-03-28 03:05:23.373482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 03:05:23.373493 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 03:05:23.373505 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 03:05:23.373539 | orchestrator |
2026-03-28 03:05:23.373551 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-28 03:05:23.373562 | orchestrator | Saturday 28 March 2026 03:04:24 +0000 (0:00:00.948) 0:06:29.938 ********
2026-03-28 03:05:23.373574 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:05:23.373585 | orchestrator |
2026-03-28 03:05:23.373596 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-28 03:05:23.373607 | orchestrator | Saturday 28 March 2026 03:04:25 +0000 (0:00:00.836) 0:06:30.775 ********
2026-03-28 03:05:23.373648 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.373660 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.373671 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.373682 | orchestrator |
2026-03-28 03:05:23.373693 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-28 03:05:23.373704 | orchestrator | Saturday 28 March 2026 03:04:25 +0000 (0:00:00.336) 0:06:31.111 ********
2026-03-28 03:05:23.373715 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.373741 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.373752 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.373763 | orchestrator |
2026-03-28 03:05:23.373774 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-28 03:05:23.373785 | orchestrator | Saturday 28 March 2026 03:04:26 +0000 (0:00:00.336) 0:06:31.447 ********
2026-03-28 03:05:23.373796 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373807 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373818 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373829 | orchestrator |
2026-03-28 03:05:23.373840 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-28 03:05:23.373850 | orchestrator | Saturday 28 March 2026 03:04:26 +0000 (0:00:00.654) 0:06:32.102 ********
2026-03-28 03:05:23.373861 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:05:23.373872 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:05:23.373883 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:05:23.373894 | orchestrator |
2026-03-28 03:05:23.373920 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-28 03:05:23.373932 | orchestrator | Saturday 28 March 2026 03:04:27 +0000 (0:00:00.685) 0:06:32.788 ********
2026-03-28 03:05:23.373964 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 03:05:23.373977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 03:05:23.373988 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 03:05:23.373999 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 03:05:23.374011 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 03:05:23.374081 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 03:05:23.374092 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 03:05:23.374103 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 03:05:23.374114 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 03:05:23.374125 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 03:05:23.374136 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 03:05:23.374147 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 03:05:23.374158 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 03:05:23.374169 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 03:05:23.374191 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 03:05:23.374202 | orchestrator |
2026-03-28 03:05:23.374213 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-28 03:05:23.374224 | orchestrator | Saturday 28 March 2026 03:04:30 +0000 (0:00:03.214) 0:06:36.002 ********
2026-03-28 03:05:23.374235 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:05:23.374246 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:05:23.374256 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:05:23.374267 | orchestrator |
2026-03-28 03:05:23.374278 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-28 03:05:23.374289 | orchestrator | Saturday 28 March 2026 03:04:31 +0000 (0:00:00.336) 0:06:36.339 ********
2026-03-28 03:05:23.374299 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:05:23.374310 | orchestrator |
2026-03-28 03:05:23.374321 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-28 03:05:23.374332 | orchestrator | Saturday 28 March 2026 03:04:32 +0000 (0:00:00.836) 0:06:37.176 ********
2026-03-28 03:05:23.374343 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 03:05:23.374354 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 03:05:23.374364 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 03:05:23.374375 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-28 03:05:23.374386 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-28 03:05:23.374397 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-28 03:05:23.374408 | orchestrator |
2026-03-28 03:05:23.374419 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-28 03:05:23.374430 | orchestrator | Saturday 28 March 2026 03:04:33 +0000 (0:00:00.996) 0:06:38.172 ********
2026-03-28 03:05:23.374440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:05:23.374451 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:05:23.374462 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:05:23.374473 | orchestrator |
2026-03-28 03:05:23.374484 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-28 03:05:23.374495 | orchestrator | Saturday 28 March 2026 03:04:35 +0000 (0:00:02.356) 0:06:40.529 ********
2026-03-28 03:05:23.374506 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 03:05:23.374517 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:05:23.374528 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:05:23.374538 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 03:05:23.374549 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-28 03:05:23.374560 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:05:23.374570 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 03:05:23.374581 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 03:05:23.374592 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:05:23.374603 | orchestrator |
2026-03-28 03:05:23.374636 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-28 03:05:23.374648 | orchestrator | Saturday 28 March 2026 03:04:36 +0000 (0:00:01.193) 0:06:41.723 ********
2026-03-28 03:05:23.374659 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 03:05:23.374670 | orchestrator |
2026-03-28 03:05:23.374681 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-28 03:05:23.374691 | orchestrator | Saturday 28 March 2026 03:04:38 +0000 (0:00:02.267) 0:06:43.991 ********
2026-03-28 03:05:23.374709 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:05:23.374721 | orchestrator |
2026-03-28 03:05:23.374739 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-28 03:05:23.374750 | orchestrator | Saturday 28 March 2026 03:04:39 +0000 (0:00:01.011) 0:06:45.003 ********
2026-03-28 03:05:23.374770 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})
2026-03-28 03:06:01.646879 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})
2026-03-28 03:06:01.646978 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})
2026-03-28 03:06:01.646990 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})
2026-03-28 03:06:01.646998 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})
2026-03-28 03:06:01.647006 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})
2026-03-28 03:06:01.647014 | orchestrator |
2026-03-28 03:06:01.647023 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-28 03:06:01.647032 | orchestrator | Saturday 28 March 2026 03:05:23 +0000 (0:00:43.519) 0:07:28.522 ********
2026-03-28 03:06:01.647040 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647049 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647057 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.647064 | orchestrator |
2026-03-28 03:06:01.647071 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-28 03:06:01.647079 | orchestrator | Saturday 28 March 2026 03:05:23 +0000 (0:00:00.344) 0:07:28.867 ********
2026-03-28 03:06:01.647087 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:06:01.647095 | orchestrator |
2026-03-28 03:06:01.647102 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-28 03:06:01.647110 | orchestrator | Saturday 28 March 2026 03:05:24 +0000 (0:00:00.866) 0:07:29.733 ********
2026-03-28 03:06:01.647118 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:06:01.647126 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:06:01.647133 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:06:01.647141 | orchestrator |
2026-03-28 03:06:01.647148 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-28 03:06:01.647156 | orchestrator | Saturday 28 March 2026 03:05:25 +0000 (0:00:00.705) 0:07:30.439 ********
2026-03-28 03:06:01.647164 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:06:01.647172 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:06:01.647180 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:06:01.647187 | orchestrator |
2026-03-28 03:06:01.647194 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-28 03:06:01.647202 | orchestrator | Saturday 28 March 2026 03:05:27 +0000 (0:00:02.648) 0:07:33.087 ********
2026-03-28 03:06:01.647209 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:06:01.647218 | orchestrator |
2026-03-28 03:06:01.647225 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-28 03:06:01.647233 | orchestrator | Saturday 28 March 2026 03:05:28 +0000 (0:00:00.866) 0:07:33.954 ********
2026-03-28 03:06:01.647241 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:06:01.647248 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:06:01.647255 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:06:01.647263 | orchestrator |
2026-03-28 03:06:01.647270 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-28 03:06:01.647278 | orchestrator | Saturday 28 March 2026 03:05:29 +0000 (0:00:01.204) 0:07:35.158 ********
2026-03-28 03:06:01.647305 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:06:01.647313 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:06:01.647320 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:06:01.647328 | orchestrator |
2026-03-28 03:06:01.647335 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-28 03:06:01.647343 | orchestrator | Saturday 28 March 2026 03:05:31 +0000 (0:00:01.176) 0:07:36.334 ********
2026-03-28 03:06:01.647350 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:06:01.647357 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:06:01.647365 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:06:01.647372 | orchestrator |
2026-03-28 03:06:01.647380 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-28 03:06:01.647387 | orchestrator | Saturday 28 March 2026 03:05:33 +0000 (0:00:01.997) 0:07:38.332 ********
2026-03-28 03:06:01.647395 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647402 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647409 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.647417 | orchestrator |
2026-03-28 03:06:01.647424 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-28 03:06:01.647432 | orchestrator | Saturday 28 March 2026 03:05:33 +0000 (0:00:00.356) 0:07:38.689 ********
2026-03-28 03:06:01.647440 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647447 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647454 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.647462 | orchestrator |
2026-03-28 03:06:01.647470 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-28 03:06:01.647477 | orchestrator | Saturday 28 March 2026 03:05:33 +0000 (0:00:00.362) 0:07:39.051 ********
2026-03-28 03:06:01.647485 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 03:06:01.647505 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-28 03:06:01.647514 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-28 03:06:01.647521 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-28 03:06:01.647529 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-28 03:06:01.647537 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-03-28 03:06:01.647545 | orchestrator |
2026-03-28 03:06:01.647553 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-28 03:06:01.647586 | orchestrator | Saturday 28 March 2026 03:05:34 +0000 (0:00:01.040) 0:07:40.091 ********
2026-03-28 03:06:01.647595 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-28 03:06:01.647603 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-28 03:06:01.647610 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 03:06:01.647618 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-28 03:06:01.647626 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 03:06:01.647634 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-28 03:06:01.647641 | orchestrator |
2026-03-28 03:06:01.647649 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-28 03:06:01.647656 | orchestrator | Saturday 28 March 2026 03:05:37 +0000 (0:00:02.431) 0:07:42.523 ********
2026-03-28 03:06:01.647664 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-28 03:06:01.647671 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-28 03:06:01.647679 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 03:06:01.647686 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 03:06:01.647694 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-28 03:06:01.647701 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-03-28 03:06:01.647709 | orchestrator |
2026-03-28 03:06:01.647717 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-28 03:06:01.647725 | orchestrator | Saturday 28 March 2026 03:05:40 +0000 (0:00:03.564) 0:07:46.087 ********
2026-03-28 03:06:01.647732 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647739 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647753 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 03:06:01.647761 | orchestrator |
2026-03-28 03:06:01.647769 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-28 03:06:01.647776 | orchestrator | Saturday 28 March 2026 03:05:44 +0000 (0:00:03.092) 0:07:49.180 ********
2026-03-28 03:06:01.647784 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647791 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647799 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-28 03:06:01.647806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 03:06:01.647814 | orchestrator |
2026-03-28 03:06:01.647821 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-28 03:06:01.647829 | orchestrator | Saturday 28 March 2026 03:05:56 +0000 (0:00:12.929) 0:08:02.110 ********
2026-03-28 03:06:01.647836 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647844 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647851 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.647859 | orchestrator |
2026-03-28 03:06:01.647866 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 03:06:01.647874 | orchestrator | Saturday 28 March 2026 03:05:58 +0000 (0:00:01.191) 0:08:03.301 ********
2026-03-28 03:06:01.647881 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647888 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.647896 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.647904 | orchestrator |
2026-03-28 03:06:01.647911 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-28 03:06:01.647919 | orchestrator | Saturday 28 March 2026 03:05:58 +0000 (0:00:00.620) 0:08:03.921 ********
2026-03-28 03:06:01.647926 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:06:01.647934 | orchestrator |
2026-03-28 03:06:01.647941 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-28 03:06:01.647949 | orchestrator | Saturday 28 March 2026 03:05:59 +0000 (0:00:00.595) 0:08:04.517 ********
2026-03-28 03:06:01.647956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 03:06:01.647963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 03:06:01.647970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 03:06:01.647977 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.647985 | orchestrator |
2026-03-28 03:06:01.647993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-28 03:06:01.648000 | orchestrator | Saturday 28 March 2026 03:05:59 +0000 (0:00:00.414) 0:08:04.931 ********
2026-03-28 03:06:01.648008 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.648015 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.648022 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.648030 | orchestrator |
2026-03-28 03:06:01.648037 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-28 03:06:01.648044 | orchestrator | Saturday 28 March 2026 03:06:00 +0000 (0:00:00.336) 0:08:05.268 ********
2026-03-28 03:06:01.648052 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.648059 | orchestrator |
2026-03-28 03:06:01.648067 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-28 03:06:01.648074 | orchestrator | Saturday 28 March 2026 03:06:00 +0000 (0:00:00.283) 0:08:05.552 ********
2026-03-28 03:06:01.648082 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.648089 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:01.648096 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:01.648104 | orchestrator |
2026-03-28 03:06:01.648111 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-28 03:06:01.648119 | orchestrator | Saturday 28 March 2026 03:06:00 +0000 (0:00:00.594) 0:08:06.146 ********
2026-03-28 03:06:01.648131 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.648139 | orchestrator |
2026-03-28 03:06:01.648150 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-28 03:06:01.648157 | orchestrator | Saturday 28 March 2026 03:06:01 +0000 (0:00:00.277) 0:08:06.424 ********
2026-03-28 03:06:01.648165 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:01.648172 | orchestrator |
2026-03-28 03:06:01.648179 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-28 03:06:01.648187 | orchestrator | Saturday 28 March 2026 03:06:01 +0000 (0:00:00.250) 0:08:06.674 ********
2026-03-28 03:06:01.648198 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170054 | orchestrator |
2026-03-28 03:06:22.170156 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-28 03:06:22.170170 | orchestrator | Saturday 28 March 2026 03:06:01 +0000 (0:00:00.123) 0:08:06.797 ********
2026-03-28 03:06:22.170178 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170187 | orchestrator |
2026-03-28 03:06:22.170194 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-28 03:06:22.170202 | orchestrator | Saturday 28 March 2026 03:06:01 +0000 (0:00:00.222) 0:08:07.019 ********
2026-03-28 03:06:22.170208 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170216 | orchestrator |
2026-03-28 03:06:22.170222 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-28 03:06:22.170229 | orchestrator | Saturday 28 March 2026 03:06:02 +0000 (0:00:00.253) 0:08:07.273 ********
2026-03-28 03:06:22.170237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 03:06:22.170244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 03:06:22.170251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 03:06:22.170259 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170266 | orchestrator |
2026-03-28 03:06:22.170273 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-28 03:06:22.170279 | orchestrator | Saturday 28 March 2026 03:06:02 +0000 (0:00:00.419) 0:08:07.693 ********
2026-03-28 03:06:22.170286 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170291 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:22.170297 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:22.170304 | orchestrator |
2026-03-28 03:06:22.170310 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-28 03:06:22.170317 | orchestrator | Saturday 28 March 2026 03:06:03 +0000 (0:00:00.606) 0:08:08.300 ********
2026-03-28 03:06:22.170324 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170330 | orchestrator |
2026-03-28 03:06:22.170337 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-28 03:06:22.170344 | orchestrator | Saturday 28 March 2026 03:06:03 +0000 (0:00:00.263) 0:08:08.564 ********
2026-03-28 03:06:22.170350 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170356 | orchestrator |
2026-03-28 03:06:22.170363 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-28 03:06:22.170370 | orchestrator |
2026-03-28 03:06:22.170377 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 03:06:22.170383 | orchestrator | Saturday 28 March 2026 03:06:04 +0000 (0:00:00.739) 0:08:09.304 ********
2026-03-28 03:06:22.170390 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:06:22.170399 | orchestrator |
2026-03-28 03:06:22.170405 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 03:06:22.170412 | orchestrator | Saturday 28 March 2026 03:06:05 +0000 (0:00:01.338) 0:08:10.642 ********
2026-03-28 03:06:22.170418 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:06:22.170445 | orchestrator |
2026-03-28 03:06:22.170453 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 03:06:22.170461 | orchestrator | Saturday 28 March 2026 03:06:06 +0000 (0:00:01.387) 0:08:12.029 ********
2026-03-28 03:06:22.170467 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170474 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:22.170481 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:22.170489 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:06:22.170497 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:06:22.170503 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:06:22.170510 | orchestrator |
2026-03-28 03:06:22.170517 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 03:06:22.170524 | orchestrator | Saturday 28 March 2026 03:06:08 +0000 (0:00:01.324) 0:08:13.354 ********
2026-03-28 03:06:22.170530 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:06:22.170537 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:06:22.170563 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:06:22.170571 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:06:22.170578 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:06:22.170588 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:06:22.170596 | orchestrator |
2026-03-28 03:06:22.170604 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 03:06:22.170612 | orchestrator | Saturday 28 March 2026 03:06:08 +0000 (0:00:00.751) 0:08:14.106 ********
2026-03-28 03:06:22.170619 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:06:22.170628 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:06:22.170637 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:06:22.170644 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:06:22.170651 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:06:22.170658 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:06:22.170665 | orchestrator |
2026-03-28 03:06:22.170673 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 03:06:22.170680 | orchestrator | Saturday 28 March 2026 03:06:09 +0000 (0:00:00.926) 0:08:15.032 ********
2026-03-28 03:06:22.170687 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:06:22.170694 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:06:22.170702 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:06:22.170711 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:06:22.170718 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:06:22.170726 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:06:22.170734 | orchestrator |
2026-03-28 03:06:22.170756 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 03:06:22.170765 | orchestrator | Saturday 28 March 2026 03:06:10 +0000 (0:00:00.721) 0:08:15.754 ********
2026-03-28 03:06:22.170772 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:06:22.170779 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:06:22.170785 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:06:22.170792 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:06:22.170799 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:06:22.170822 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:06:22.170828 | orchestrator |
2026-03-28 03:06:22.170835 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-03-28 03:06:22.170841 | orchestrator | Saturday 28 March 2026 03:06:12 +0000 (0:00:02.157) 0:08:17.911 ******** 2026-03-28 03:06:22.170848 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.170855 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.170862 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.170869 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.170875 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.170882 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.170888 | orchestrator | 2026-03-28 03:06:22.170894 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 03:06:22.170901 | orchestrator | Saturday 28 March 2026 03:06:13 +0000 (0:00:00.638) 0:08:18.549 ******** 2026-03-28 03:06:22.170908 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.170923 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.170929 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.170936 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.170943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.170950 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.170956 | orchestrator | 2026-03-28 03:06:22.170963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 03:06:22.170970 | orchestrator | Saturday 28 March 2026 03:06:14 +0000 (0:00:00.912) 0:08:19.462 ******** 2026-03-28 03:06:22.170977 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:22.170983 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:22.170990 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:22.170997 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:22.171003 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:22.171010 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:22.171016 | orchestrator 
| 2026-03-28 03:06:22.171022 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 03:06:22.171028 | orchestrator | Saturday 28 March 2026 03:06:15 +0000 (0:00:01.089) 0:08:20.552 ******** 2026-03-28 03:06:22.171034 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:22.171040 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:22.171046 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:22.171052 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:22.171057 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:22.171063 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:22.171070 | orchestrator | 2026-03-28 03:06:22.171076 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 03:06:22.171083 | orchestrator | Saturday 28 March 2026 03:06:16 +0000 (0:00:01.365) 0:08:21.917 ******** 2026-03-28 03:06:22.171089 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.171096 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.171102 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.171109 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.171115 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.171122 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.171128 | orchestrator | 2026-03-28 03:06:22.171135 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 03:06:22.171141 | orchestrator | Saturday 28 March 2026 03:06:17 +0000 (0:00:00.641) 0:08:22.559 ******** 2026-03-28 03:06:22.171148 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.171154 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.171161 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.171167 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:22.171174 | orchestrator | ok: [testbed-node-1] 2026-03-28 
03:06:22.171181 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:22.171187 | orchestrator | 2026-03-28 03:06:22.171194 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 03:06:22.171200 | orchestrator | Saturday 28 March 2026 03:06:18 +0000 (0:00:00.930) 0:08:23.490 ******** 2026-03-28 03:06:22.171207 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:22.171213 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:22.171219 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:22.171226 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.171233 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.171239 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.171245 | orchestrator | 2026-03-28 03:06:22.171252 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 03:06:22.171258 | orchestrator | Saturday 28 March 2026 03:06:18 +0000 (0:00:00.659) 0:08:24.149 ******** 2026-03-28 03:06:22.171265 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:22.171271 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:22.171278 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:22.171285 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.171291 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.171305 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.171312 | orchestrator | 2026-03-28 03:06:22.171318 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 03:06:22.171325 | orchestrator | Saturday 28 March 2026 03:06:19 +0000 (0:00:00.935) 0:08:25.085 ******** 2026-03-28 03:06:22.171331 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:22.171338 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:22.171344 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:22.171351 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 03:06:22.171357 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.171364 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.171370 | orchestrator | 2026-03-28 03:06:22.171377 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 03:06:22.171383 | orchestrator | Saturday 28 March 2026 03:06:20 +0000 (0:00:00.648) 0:08:25.734 ******** 2026-03-28 03:06:22.171390 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.171396 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.171403 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.171409 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.171416 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:22.171423 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:22.171429 | orchestrator | 2026-03-28 03:06:22.171436 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 03:06:22.171443 | orchestrator | Saturday 28 March 2026 03:06:21 +0000 (0:00:00.956) 0:08:26.690 ******** 2026-03-28 03:06:22.171450 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:22.171457 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:22.171463 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:22.171470 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:06:22.171485 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:06:55.793915 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:06:55.794052 | orchestrator | 2026-03-28 03:06:55.794065 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 03:06:55.794071 | orchestrator | Saturday 28 March 2026 03:06:22 +0000 (0:00:00.630) 0:08:27.320 ******** 2026-03-28 03:06:55.794076 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794080 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 03:06:55.794084 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:55.794089 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794094 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:55.794098 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:55.794102 | orchestrator | 2026-03-28 03:06:55.794106 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 03:06:55.794111 | orchestrator | Saturday 28 March 2026 03:06:23 +0000 (0:00:00.994) 0:08:28.315 ******** 2026-03-28 03:06:55.794115 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794119 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794123 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794127 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794131 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:55.794135 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:55.794139 | orchestrator | 2026-03-28 03:06:55.794143 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 03:06:55.794147 | orchestrator | Saturday 28 March 2026 03:06:23 +0000 (0:00:00.647) 0:08:28.962 ******** 2026-03-28 03:06:55.794151 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794155 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794159 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794199 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794204 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:55.794208 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:55.794212 | orchestrator | 2026-03-28 03:06:55.794216 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-28 03:06:55.794220 | orchestrator | Saturday 28 March 2026 03:06:25 +0000 (0:00:01.442) 0:08:30.405 ******** 2026-03-28 03:06:55.794239 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-28 03:06:55.794243 | orchestrator | 2026-03-28 03:06:55.794247 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-28 03:06:55.794251 | orchestrator | Saturday 28 March 2026 03:06:30 +0000 (0:00:05.043) 0:08:35.448 ******** 2026-03-28 03:06:55.794256 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:06:55.794260 | orchestrator | 2026-03-28 03:06:55.794264 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-28 03:06:55.794268 | orchestrator | Saturday 28 March 2026 03:06:32 +0000 (0:00:02.113) 0:08:37.561 ******** 2026-03-28 03:06:55.794272 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:06:55.794276 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:06:55.794279 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:06:55.794283 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794287 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:06:55.794291 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:06:55.794295 | orchestrator | 2026-03-28 03:06:55.794299 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-28 03:06:55.794303 | orchestrator | Saturday 28 March 2026 03:06:33 +0000 (0:00:01.587) 0:08:39.149 ******** 2026-03-28 03:06:55.794307 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:06:55.794311 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:06:55.794315 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:06:55.794319 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:06:55.794323 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:06:55.794326 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:06:55.794330 | orchestrator | 2026-03-28 03:06:55.794334 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-28 03:06:55.794338 | orchestrator | Saturday 28 March 2026 03:06:35 +0000 (0:00:01.346) 0:08:40.495 ******** 2026-03-28 03:06:55.794343 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:06:55.794349 | orchestrator | 2026-03-28 03:06:55.794353 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-28 03:06:55.794357 | orchestrator | Saturday 28 March 2026 03:06:36 +0000 (0:00:01.374) 0:08:41.870 ******** 2026-03-28 03:06:55.794361 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:06:55.794365 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:06:55.794369 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:06:55.794373 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:06:55.794377 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:06:55.794381 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:06:55.794385 | orchestrator | 2026-03-28 03:06:55.794389 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-28 03:06:55.794393 | orchestrator | Saturday 28 March 2026 03:06:38 +0000 (0:00:01.636) 0:08:43.506 ******** 2026-03-28 03:06:55.794397 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:06:55.794401 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:06:55.794405 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:06:55.794409 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:06:55.794413 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:06:55.794416 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:06:55.794420 | orchestrator | 2026-03-28 03:06:55.794424 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-28 03:06:55.794429 | orchestrator | Saturday 28 March 2026 03:06:42 +0000 (0:00:03.967) 
0:08:47.473 ******** 2026-03-28 03:06:55.794436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:06:55.794440 | orchestrator | 2026-03-28 03:06:55.794444 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-28 03:06:55.794449 | orchestrator | Saturday 28 March 2026 03:06:43 +0000 (0:00:01.429) 0:08:48.902 ******** 2026-03-28 03:06:55.794457 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794461 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794465 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794469 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794484 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:55.794489 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:55.794494 | orchestrator | 2026-03-28 03:06:55.794499 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-28 03:06:55.794504 | orchestrator | Saturday 28 March 2026 03:06:44 +0000 (0:00:00.673) 0:08:49.576 ******** 2026-03-28 03:06:55.794543 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:06:55.794549 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:06:55.794554 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:06:55.794558 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:06:55.794563 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:06:55.794568 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:06:55.794572 | orchestrator | 2026-03-28 03:06:55.794577 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-28 03:06:55.794581 | orchestrator | Saturday 28 March 2026 03:06:47 +0000 (0:00:02.588) 0:08:52.165 ******** 2026-03-28 03:06:55.794586 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794591 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794595 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794600 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:06:55.794605 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:06:55.794609 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:06:55.794614 | orchestrator | 2026-03-28 03:06:55.794618 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-28 03:06:55.794623 | orchestrator | 2026-03-28 03:06:55.794627 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 03:06:55.794632 | orchestrator | Saturday 28 March 2026 03:06:48 +0000 (0:00:01.223) 0:08:53.388 ******** 2026-03-28 03:06:55.794637 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:06:55.794642 | orchestrator | 2026-03-28 03:06:55.794647 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 03:06:55.794652 | orchestrator | Saturday 28 March 2026 03:06:48 +0000 (0:00:00.543) 0:08:53.932 ******** 2026-03-28 03:06:55.794656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:06:55.794661 | orchestrator | 2026-03-28 03:06:55.794666 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 03:06:55.794670 | orchestrator | Saturday 28 March 2026 03:06:49 +0000 (0:00:00.813) 0:08:54.745 ******** 2026-03-28 03:06:55.794674 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794679 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:55.794684 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:55.794688 | orchestrator | 2026-03-28 03:06:55.794693 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-28 03:06:55.794697 | orchestrator | Saturday 28 March 2026 03:06:49 +0000 (0:00:00.336) 0:08:55.082 ******** 2026-03-28 03:06:55.794702 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794706 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794711 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794715 | orchestrator | 2026-03-28 03:06:55.794720 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 03:06:55.794725 | orchestrator | Saturday 28 March 2026 03:06:50 +0000 (0:00:00.729) 0:08:55.812 ******** 2026-03-28 03:06:55.794729 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794733 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794738 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794743 | orchestrator | 2026-03-28 03:06:55.794747 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 03:06:55.794756 | orchestrator | Saturday 28 March 2026 03:06:51 +0000 (0:00:00.752) 0:08:56.564 ******** 2026-03-28 03:06:55.794760 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794765 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794769 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794774 | orchestrator | 2026-03-28 03:06:55.794778 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 03:06:55.794783 | orchestrator | Saturday 28 March 2026 03:06:52 +0000 (0:00:01.133) 0:08:57.697 ******** 2026-03-28 03:06:55.794788 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794792 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:55.794797 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:55.794801 | orchestrator | 2026-03-28 03:06:55.794806 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 
03:06:55.794811 | orchestrator | Saturday 28 March 2026 03:06:52 +0000 (0:00:00.397) 0:08:58.095 ******** 2026-03-28 03:06:55.794815 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794819 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:55.794824 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:55.794828 | orchestrator | 2026-03-28 03:06:55.794833 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 03:06:55.794838 | orchestrator | Saturday 28 March 2026 03:06:53 +0000 (0:00:00.352) 0:08:58.448 ******** 2026-03-28 03:06:55.794843 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794847 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:55.794852 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:06:55.794856 | orchestrator | 2026-03-28 03:06:55.794860 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 03:06:55.794864 | orchestrator | Saturday 28 March 2026 03:06:53 +0000 (0:00:00.335) 0:08:58.783 ******** 2026-03-28 03:06:55.794868 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794872 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794876 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794880 | orchestrator | 2026-03-28 03:06:55.794884 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 03:06:55.794891 | orchestrator | Saturday 28 March 2026 03:06:54 +0000 (0:00:01.092) 0:08:59.875 ******** 2026-03-28 03:06:55.794895 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:06:55.794899 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:06:55.794903 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:06:55.794907 | orchestrator | 2026-03-28 03:06:55.794911 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 03:06:55.794915 | orchestrator | 
Saturday 28 March 2026 03:06:55 +0000 (0:00:00.735) 0:09:00.610 ******** 2026-03-28 03:06:55.794919 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:06:55.794923 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:06:55.794930 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:07:31.683054 | orchestrator | 2026-03-28 03:07:31.683183 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 03:07:31.683198 | orchestrator | Saturday 28 March 2026 03:06:55 +0000 (0:00:00.332) 0:09:00.943 ******** 2026-03-28 03:07:31.683209 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:07:31.683219 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:07:31.683228 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:07:31.683237 | orchestrator | 2026-03-28 03:07:31.683246 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 03:07:31.683305 | orchestrator | Saturday 28 March 2026 03:06:56 +0000 (0:00:00.333) 0:09:01.276 ******** 2026-03-28 03:07:31.683318 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:07:31.683328 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:07:31.683337 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:07:31.683345 | orchestrator | 2026-03-28 03:07:31.683354 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 03:07:31.683363 | orchestrator | Saturday 28 March 2026 03:06:56 +0000 (0:00:00.674) 0:09:01.951 ******** 2026-03-28 03:07:31.683391 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:07:31.683401 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:07:31.683411 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:07:31.683420 | orchestrator | 2026-03-28 03:07:31.683428 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 03:07:31.683437 | orchestrator | Saturday 28 March 2026 03:06:57 +0000 
(0:00:00.355) 0:09:02.307 ******** 2026-03-28 03:07:31.683446 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:07:31.683455 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:07:31.683464 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:07:31.683501 | orchestrator | 2026-03-28 03:07:31.683510 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 03:07:31.683519 | orchestrator | Saturday 28 March 2026 03:06:57 +0000 (0:00:00.382) 0:09:02.689 ******** 2026-03-28 03:07:31.683528 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:07:31.683537 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:07:31.683545 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:07:31.683554 | orchestrator | 2026-03-28 03:07:31.683563 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 03:07:31.683572 | orchestrator | Saturday 28 March 2026 03:06:57 +0000 (0:00:00.330) 0:09:03.020 ******** 2026-03-28 03:07:31.683580 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:07:31.683589 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:07:31.683600 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:07:31.683610 | orchestrator | 2026-03-28 03:07:31.683620 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 03:07:31.683630 | orchestrator | Saturday 28 March 2026 03:06:58 +0000 (0:00:00.623) 0:09:03.643 ******** 2026-03-28 03:07:31.683640 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:07:31.683649 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:07:31.683659 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:07:31.683669 | orchestrator | 2026-03-28 03:07:31.683679 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 03:07:31.683689 | orchestrator | Saturday 28 March 2026 03:06:58 +0000 (0:00:00.328) 
0:09:03.971 ********
2026-03-28 03:07:31.683699 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:31.683710 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:31.683719 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:31.683729 | orchestrator |
2026-03-28 03:07:31.683739 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 03:07:31.683749 | orchestrator | Saturday 28 March 2026 03:06:59 +0000 (0:00:00.360) 0:09:04.332 ********
2026-03-28 03:07:31.683759 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:31.683769 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:31.683779 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:31.683788 | orchestrator |
2026-03-28 03:07:31.683798 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-28 03:07:31.683808 | orchestrator | Saturday 28 March 2026 03:07:00 +0000 (0:00:00.853) 0:09:05.185 ********
2026-03-28 03:07:31.683818 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:31.683828 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:31.683838 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-28 03:07:31.683849 | orchestrator |
2026-03-28 03:07:31.683859 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-28 03:07:31.683869 | orchestrator | Saturday 28 March 2026 03:07:00 +0000 (0:00:00.472) 0:09:05.658 ********
2026-03-28 03:07:31.683879 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 03:07:31.683889 | orchestrator |
2026-03-28 03:07:31.683900 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-28 03:07:31.683909 | orchestrator | Saturday 28 March 2026 03:07:02 +0000 (0:00:02.480) 0:09:08.138 ********
2026-03-28 03:07:31.683921 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-28 03:07:31.683943 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:31.683953 | orchestrator |
2026-03-28 03:07:31.683963 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-28 03:07:31.683973 | orchestrator | Saturday 28 March 2026 03:07:03 +0000 (0:00:00.323) 0:09:08.461 ********
2026-03-28 03:07:31.683998 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 03:07:31.684032 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 03:07:31.684043 | orchestrator |
2026-03-28 03:07:31.684053 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-28 03:07:31.684064 | orchestrator | Saturday 28 March 2026 03:07:12 +0000 (0:00:09.065) 0:09:17.526 ********
2026-03-28 03:07:31.684074 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 03:07:31.684084 | orchestrator |
2026-03-28 03:07:31.684095 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-28 03:07:31.684105 | orchestrator | Saturday 28 March 2026 03:07:17 +0000 (0:00:04.738) 0:09:22.265 ********
2026-03-28 03:07:31.684115 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:31.684126 | orchestrator |
2026-03-28 03:07:31.684137 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-28 03:07:31.684147 | orchestrator | Saturday 28 March 2026 03:07:17 +0000 (0:00:00.602) 0:09:22.868 ********
2026-03-28 03:07:31.684156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-28 03:07:31.684167 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-28 03:07:31.684177 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-28 03:07:31.684187 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-28 03:07:31.684197 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-28 03:07:31.684208 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-28 03:07:31.684217 | orchestrator |
2026-03-28 03:07:31.684227 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-28 03:07:31.684237 | orchestrator | Saturday 28 March 2026 03:07:18 +0000 (0:00:01.042) 0:09:23.910 ********
2026-03-28 03:07:31.684247 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:07:31.684257 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:07:31.684268 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:07:31.684278 | orchestrator |
2026-03-28 03:07:31.684287 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-28 03:07:31.684297 | orchestrator | Saturday 28 March 2026 03:07:21 +0000 (0:00:02.326) 0:09:26.236 ********
2026-03-28 03:07:31.684307 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 03:07:31.684317 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:07:31.684327 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:31.684338 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 03:07:31.684349 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-28 03:07:31.684359 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:31.684369 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 03:07:31.684380 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 03:07:31.684396 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:31.684406 | orchestrator |
2026-03-28 03:07:31.684416 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-28 03:07:31.684426 | orchestrator | Saturday 28 March 2026 03:07:22 +0000 (0:00:01.567) 0:09:27.804 ********
2026-03-28 03:07:31.684436 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:31.684446 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:31.684456 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:31.684467 | orchestrator |
2026-03-28 03:07:31.684520 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-28 03:07:31.684531 | orchestrator | Saturday 28 March 2026 03:07:25 +0000 (0:00:02.761) 0:09:30.566 ********
2026-03-28 03:07:31.684540 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:31.684551 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:31.684561 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:31.684571 | orchestrator |
2026-03-28 03:07:31.684581 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-28 03:07:31.684592 | orchestrator | Saturday 28 March 2026 03:07:25 +0000 (0:00:00.326) 0:09:30.893 ********
2026-03-28 03:07:31.684602 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:31.684612 | orchestrator |
2026-03-28 03:07:31.684623 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-28 03:07:31.684633 | orchestrator | Saturday 28 March 2026 03:07:26 +0000 (0:00:00.859) 0:09:31.753 ********
2026-03-28 03:07:31.684643 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:31.684654 | orchestrator |
2026-03-28 03:07:31.684664 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-28 03:07:31.684674 | orchestrator | Saturday 28 March 2026 03:07:27 +0000 (0:00:00.609) 0:09:32.362 ********
2026-03-28 03:07:31.684684 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:31.684694 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:31.684704 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:31.684714 | orchestrator |
2026-03-28 03:07:31.684725 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-28 03:07:31.684740 | orchestrator | Saturday 28 March 2026 03:07:28 +0000 (0:00:01.248) 0:09:33.611 ********
2026-03-28 03:07:31.684751 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:31.684760 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:31.684771 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:31.684781 | orchestrator |
2026-03-28 03:07:31.684791 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-28 03:07:31.684801 | orchestrator | Saturday 28 March 2026 03:07:29 +0000 (0:00:01.495) 0:09:35.106 ********
2026-03-28 03:07:31.684811 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:31.684821 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:31.684832 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:31.684842 | orchestrator |
2026-03-28 03:07:31.684858 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-28 03:07:53.786390 | orchestrator | Saturday 28 March 2026 03:07:31 +0000 (0:00:01.723) 0:09:36.829 ********
2026-03-28 03:07:53.786518 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:53.786534 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:53.786546 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:53.786553 | orchestrator |
2026-03-28 03:07:53.786560 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-28 03:07:53.786567 | orchestrator | Saturday 28 March 2026 03:07:33 +0000 (0:00:01.929) 0:09:38.759 ********
2026-03-28 03:07:53.786573 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.786579 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.786585 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.786591 | orchestrator |
2026-03-28 03:07:53.786597 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 03:07:53.786622 | orchestrator | Saturday 28 March 2026 03:07:35 +0000 (0:00:01.560) 0:09:40.320 ********
2026-03-28 03:07:53.786629 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:53.786634 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:53.786640 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:53.786646 | orchestrator |
2026-03-28 03:07:53.786652 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-28 03:07:53.786658 | orchestrator | Saturday 28 March 2026 03:07:35 +0000 (0:00:00.665) 0:09:40.986 ********
2026-03-28 03:07:53.786664 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:53.786670 | orchestrator |
2026-03-28 03:07:53.786676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-28 03:07:53.786682 | orchestrator | Saturday 28 March 2026 03:07:36 +0000 (0:00:00.844) 0:09:41.830 ********
2026-03-28 03:07:53.786687 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.786693 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.786699 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.786704 | orchestrator |
2026-03-28 03:07:53.786710 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-28 03:07:53.786716 | orchestrator | Saturday 28 March 2026 03:07:37 +0000 (0:00:00.347) 0:09:42.178 ********
2026-03-28 03:07:53.786722 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:07:53.786728 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:07:53.786733 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:07:53.786739 | orchestrator |
2026-03-28 03:07:53.786745 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-28 03:07:53.786751 | orchestrator | Saturday 28 March 2026 03:07:38 +0000 (0:00:01.284) 0:09:43.463 ********
2026-03-28 03:07:53.786757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 03:07:53.786763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 03:07:53.786769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 03:07:53.786775 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.786780 | orchestrator |
2026-03-28 03:07:53.786786 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-28 03:07:53.786792 | orchestrator | Saturday 28 March 2026 03:07:39 +0000 (0:00:00.967) 0:09:44.430 ********
2026-03-28 03:07:53.786798 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.786804 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.786809 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.786815 | orchestrator |
2026-03-28 03:07:53.786821 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-28 03:07:53.786831 | orchestrator |
2026-03-28 03:07:53.786840 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 03:07:53.786849 | orchestrator | Saturday 28 March 2026 03:07:40 +0000 (0:00:00.888) 0:09:45.319 ********
2026-03-28 03:07:53.786860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:53.786871 | orchestrator |
2026-03-28 03:07:53.786880 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 03:07:53.786886 | orchestrator | Saturday 28 March 2026 03:07:40 +0000 (0:00:00.532) 0:09:45.851 ********
2026-03-28 03:07:53.786892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:07:53.786898 | orchestrator |
2026-03-28 03:07:53.786904 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 03:07:53.786909 | orchestrator | Saturday 28 March 2026 03:07:41 +0000 (0:00:00.848) 0:09:46.699 ********
2026-03-28 03:07:53.786915 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.786921 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.786926 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.786938 | orchestrator |
2026-03-28 03:07:53.786944 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 03:07:53.786949 | orchestrator | Saturday 28 March 2026 03:07:41 +0000 (0:00:00.333) 0:09:47.033 ********
2026-03-28 03:07:53.786956 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.786963 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.786970 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.786980 | orchestrator |
2026-03-28 03:07:53.786991 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 03:07:53.787002 | orchestrator | Saturday 28 March 2026 03:07:42 +0000 (0:00:00.725) 0:09:47.759 ********
2026-03-28 03:07:53.787009 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787028 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787036 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787042 | orchestrator |
2026-03-28 03:07:53.787049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 03:07:53.787057 | orchestrator | Saturday 28 March 2026 03:07:43 +0000 (0:00:01.041) 0:09:48.800 ********
2026-03-28 03:07:53.787064 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787071 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787078 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787085 | orchestrator |
2026-03-28 03:07:53.787093 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 03:07:53.787100 | orchestrator | Saturday 28 March 2026 03:07:44 +0000 (0:00:00.750) 0:09:49.550 ********
2026-03-28 03:07:53.787119 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787127 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787135 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787142 | orchestrator |
2026-03-28 03:07:53.787149 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 03:07:53.787157 | orchestrator | Saturday 28 March 2026 03:07:44 +0000 (0:00:00.364) 0:09:49.915 ********
2026-03-28 03:07:53.787164 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787171 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787179 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787186 | orchestrator |
2026-03-28 03:07:53.787194 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 03:07:53.787201 | orchestrator | Saturday 28 March 2026 03:07:45 +0000 (0:00:00.306) 0:09:50.221 ********
2026-03-28 03:07:53.787208 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787215 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787222 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787229 | orchestrator |
2026-03-28 03:07:53.787236 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 03:07:53.787242 | orchestrator | Saturday 28 March 2026 03:07:45 +0000 (0:00:00.655) 0:09:50.877 ********
2026-03-28 03:07:53.787248 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787255 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787261 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787267 | orchestrator |
2026-03-28 03:07:53.787273 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 03:07:53.787279 | orchestrator | Saturday 28 March 2026 03:07:47 +0000 (0:00:01.746) 0:09:52.623 ********
2026-03-28 03:07:53.787286 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787292 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787298 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787304 | orchestrator |
2026-03-28 03:07:53.787310 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 03:07:53.787316 | orchestrator | Saturday 28 March 2026 03:07:48 +0000 (0:00:00.711) 0:09:53.335 ********
2026-03-28 03:07:53.787323 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787329 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787335 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787341 | orchestrator |
2026-03-28 03:07:53.787348 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 03:07:53.787359 | orchestrator | Saturday 28 March 2026 03:07:48 +0000 (0:00:00.309) 0:09:53.645 ********
2026-03-28 03:07:53.787365 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787371 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787378 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787384 | orchestrator |
2026-03-28 03:07:53.787390 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 03:07:53.787396 | orchestrator | Saturday 28 March 2026 03:07:49 +0000 (0:00:00.665) 0:09:54.310 ********
2026-03-28 03:07:53.787403 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787409 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787415 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787421 | orchestrator |
2026-03-28 03:07:53.787428 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 03:07:53.787434 | orchestrator | Saturday 28 March 2026 03:07:49 +0000 (0:00:00.371) 0:09:54.681 ********
2026-03-28 03:07:53.787440 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787478 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787485 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787492 | orchestrator |
2026-03-28 03:07:53.787498 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 03:07:53.787504 | orchestrator | Saturday 28 March 2026 03:07:49 +0000 (0:00:00.367) 0:09:55.049 ********
2026-03-28 03:07:53.787510 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787516 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787522 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787529 | orchestrator |
2026-03-28 03:07:53.787535 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 03:07:53.787541 | orchestrator | Saturday 28 March 2026 03:07:50 +0000 (0:00:00.350) 0:09:55.399 ********
2026-03-28 03:07:53.787547 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787554 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787560 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787566 | orchestrator |
2026-03-28 03:07:53.787572 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 03:07:53.787578 | orchestrator | Saturday 28 March 2026 03:07:50 +0000 (0:00:00.686) 0:09:56.086 ********
2026-03-28 03:07:53.787585 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787591 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787597 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787603 | orchestrator |
2026-03-28 03:07:53.787609 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 03:07:53.787616 | orchestrator | Saturday 28 March 2026 03:07:51 +0000 (0:00:00.342) 0:09:56.428 ********
2026-03-28 03:07:53.787622 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:07:53.787628 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:07:53.787634 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:07:53.787640 | orchestrator |
2026-03-28 03:07:53.787646 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 03:07:53.787653 | orchestrator | Saturday 28 March 2026 03:07:51 +0000 (0:00:00.357) 0:09:56.786 ********
2026-03-28 03:07:53.787659 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787665 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787671 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787677 | orchestrator |
2026-03-28 03:07:53.787687 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 03:07:53.787694 | orchestrator | Saturday 28 March 2026 03:07:51 +0000 (0:00:00.370) 0:09:57.157 ********
2026-03-28 03:07:53.787700 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:07:53.787707 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:07:53.787713 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:07:53.787719 | orchestrator |
2026-03-28 03:07:53.787725 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-28 03:07:53.787731 | orchestrator | Saturday 28 March 2026 03:07:52 +0000 (0:00:00.956) 0:09:58.113 ********
2026-03-28 03:07:53.787747 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:08:43.633486 | orchestrator |
2026-03-28 03:08:43.633669 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-28 03:08:43.633704 | orchestrator | Saturday 28 March 2026 03:07:53 +0000 (0:00:00.819) 0:09:58.932 ********
2026-03-28 03:08:43.633725 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.633746 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:08:43.633767 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:08:43.633779 | orchestrator |
2026-03-28 03:08:43.633791 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-28 03:08:43.633802 | orchestrator | Saturday 28 March 2026 03:07:56 +0000 (0:00:02.297) 0:10:01.230 ********
2026-03-28 03:08:43.633813 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 03:08:43.633825 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 03:08:43.633837 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:08:43.633848 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 03:08:43.633859 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-28 03:08:43.633870 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:08:43.633881 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 03:08:43.633892 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 03:08:43.633903 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:08:43.633913 | orchestrator |
2026-03-28 03:08:43.633924 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-28 03:08:43.633935 | orchestrator | Saturday 28 March 2026 03:07:57 +0000 (0:00:01.256) 0:10:02.487 ********
2026-03-28 03:08:43.633946 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:43.633960 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:08:43.633973 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:08:43.633986 | orchestrator |
2026-03-28 03:08:43.634000 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-28 03:08:43.634013 | orchestrator | Saturday 28 March 2026 03:07:57 +0000 (0:00:00.371) 0:10:02.858 ********
2026-03-28 03:08:43.634103 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:08:43.634116 | orchestrator |
2026-03-28 03:08:43.634129 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-28 03:08:43.634143 | orchestrator | Saturday 28 March 2026 03:07:58 +0000 (0:00:00.812) 0:10:03.670 ********
2026-03-28 03:08:43.634158 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:43.634175 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:43.634188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:43.634200 | orchestrator |
2026-03-28 03:08:43.634213 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-28 03:08:43.634226 | orchestrator | Saturday 28 March 2026 03:07:59 +0000 (0:00:00.982) 0:10:04.653 ********
2026-03-28 03:08:43.634238 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634252 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-28 03:08:43.634265 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634277 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634325 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-28 03:08:43.634345 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-28 03:08:43.634365 | orchestrator |
2026-03-28 03:08:43.634422 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-28 03:08:43.634441 | orchestrator | Saturday 28 March 2026 03:08:04 +0000 (0:00:04.643) 0:10:09.297 ********
2026-03-28 03:08:43.634458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634476 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:08:43.634494 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634510 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:08:43.634529 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:08:43.634570 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 03:08:43.634591 | orchestrator |
2026-03-28 03:08:43.634610 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-28 03:08:43.634630 | orchestrator | Saturday 28 March 2026 03:08:06 +0000 (0:00:02.517) 0:10:11.815 ********
2026-03-28 03:08:43.634649 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 03:08:43.634667 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:08:43.634685 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 03:08:43.634697 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:08:43.634708 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 03:08:43.634719 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:08:43.634730 | orchestrator |
2026-03-28 03:08:43.634765 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-28 03:08:43.634777 | orchestrator | Saturday 28 March 2026 03:08:08 +0000 (0:00:01.688) 0:10:13.504 ********
2026-03-28 03:08:43.634788 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-28 03:08:43.634799 | orchestrator |
2026-03-28 03:08:43.634810 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-28 03:08:43.634821 | orchestrator | Saturday 28 March 2026 03:08:08 +0000 (0:00:00.264) 0:10:13.768 ********
2026-03-28 03:08:43.634831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634887 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:43.634898 | orchestrator |
2026-03-28 03:08:43.634908 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-28 03:08:43.634919 | orchestrator | Saturday 28 March 2026 03:08:09 +0000 (0:00:00.639) 0:10:14.407 ********
2026-03-28 03:08:43.634930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.634999 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:43.635011 | orchestrator |
2026-03-28 03:08:43.635022 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-28 03:08:43.635033 | orchestrator | Saturday 28 March 2026 03:08:09 +0000 (0:00:00.649) 0:10:15.057 ********
2026-03-28 03:08:43.635044 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.635055 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.635066 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.635078 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.635089 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 03:08:43.635100 | orchestrator |
2026-03-28 03:08:43.635111 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-28 03:08:43.635122 | orchestrator | Saturday 28 March 2026 03:08:41 +0000 (0:00:31.390) 0:10:46.447 ********
2026-03-28 03:08:43.635133 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:43.635144 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:08:43.635154 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:08:43.635165 | orchestrator |
2026-03-28 03:08:43.635176 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-28 03:08:43.635187 | orchestrator | Saturday 28 March 2026 03:08:41 +0000 (0:00:00.322) 0:10:46.769 ********
2026-03-28 03:08:43.635198 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:43.635209 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:08:43.635219 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:08:43.635230 | orchestrator |
2026-03-28 03:08:43.635248 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-28 03:08:43.635259 | orchestrator | Saturday 28 March 2026 03:08:42 +0000 (0:00:00.610) 0:10:47.380 ********
2026-03-28 03:08:43.635270 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:08:43.635283 | orchestrator |
2026-03-28 03:08:43.635306 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-28 03:08:43.635332 | orchestrator | Saturday 28 March 2026 03:08:42 +0000 (0:00:00.575) 0:10:47.955 ********
2026-03-28 03:08:43.635361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:08:54.635862 | orchestrator |
2026-03-28 03:08:54.636043 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-28 03:08:54.636064 | orchestrator | Saturday 28 March 2026 03:08:43 +0000 (0:00:00.824) 0:10:48.779 ********
2026-03-28 03:08:54.636076 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:08:54.636104 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:08:54.636117 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:08:54.636129 | orchestrator |
2026-03-28 03:08:54.636142 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-28 03:08:54.636154 | orchestrator | Saturday 28 March 2026 03:08:44 +0000 (0:00:01.338) 0:10:50.118 ********
2026-03-28 03:08:54.636193 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:08:54.636205 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:08:54.636216 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:08:54.636227 | orchestrator |
2026-03-28 03:08:54.636238 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-28 03:08:54.636249 | orchestrator | Saturday 28 March 2026 03:08:46 +0000 (0:00:01.216) 0:10:51.334 ********
2026-03-28 03:08:54.636261 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:08:54.636271 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:08:54.636282 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:08:54.636293 | orchestrator |
2026-03-28 03:08:54.636304 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-28 03:08:54.636316 | orchestrator | Saturday 28 March 2026 03:08:47 +0000 (0:00:01.699) 0:10:53.034 ********
2026-03-28 03:08:54.636328 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:54.636343 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:54.636357 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 03:08:54.636369 | orchestrator |
2026-03-28 03:08:54.636412 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 03:08:54.636426 | orchestrator | Saturday 28 March 2026 03:08:50 +0000 (0:00:02.673) 0:10:55.707 ********
2026-03-28 03:08:54.636439 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:08:54.636451 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:08:54.636463 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:08:54.636476 | orchestrator
| 2026-03-28 03:08:54.636489 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 03:08:54.636502 | orchestrator | Saturday 28 March 2026 03:08:50 +0000 (0:00:00.380) 0:10:56.088 ******** 2026-03-28 03:08:54.636514 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:08:54.636528 | orchestrator | 2026-03-28 03:08:54.636541 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-28 03:08:54.636553 | orchestrator | Saturday 28 March 2026 03:08:51 +0000 (0:00:00.920) 0:10:57.008 ******** 2026-03-28 03:08:54.636566 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:08:54.636580 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:08:54.636593 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:08:54.636604 | orchestrator | 2026-03-28 03:08:54.636617 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-28 03:08:54.636630 | orchestrator | Saturday 28 March 2026 03:08:52 +0000 (0:00:00.378) 0:10:57.386 ******** 2026-03-28 03:08:54.636643 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:08:54.636656 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:08:54.636669 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:08:54.636682 | orchestrator | 2026-03-28 03:08:54.636694 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-28 03:08:54.636707 | orchestrator | Saturday 28 March 2026 03:08:52 +0000 (0:00:00.348) 0:10:57.734 ******** 2026-03-28 03:08:54.636721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:08:54.636735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:08:54.636748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:08:54.636762 | orchestrator 
| skipping: [testbed-node-3] 2026-03-28 03:08:54.636772 | orchestrator | 2026-03-28 03:08:54.636783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-28 03:08:54.636794 | orchestrator | Saturday 28 March 2026 03:08:53 +0000 (0:00:01.274) 0:10:59.008 ******** 2026-03-28 03:08:54.636806 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:08:54.636817 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:08:54.636836 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:08:54.636847 | orchestrator | 2026-03-28 03:08:54.636858 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:08:54.636869 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-28 03:08:54.636904 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-28 03:08:54.636916 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-28 03:08:54.636927 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-28 03:08:54.636938 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-28 03:08:54.636970 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-28 03:08:54.636982 | orchestrator | 2026-03-28 03:08:54.636993 | orchestrator | 2026-03-28 03:08:54.637004 | orchestrator | 2026-03-28 03:08:54.637015 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:08:54.637026 | orchestrator | Saturday 28 March 2026 03:08:54 +0000 (0:00:00.264) 0:10:59.273 ******** 2026-03-28 03:08:54.637037 | orchestrator | =============================================================================== 
2026-03-28 03:08:54.637048 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 61.50s
2026-03-28 03:08:54.637059 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.52s
2026-03-28 03:08:54.637069 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.39s
2026-03-28 03:08:54.637080 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.42s
2026-03-28 03:08:54.637091 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.96s
2026-03-28 03:08:54.637101 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s
2026-03-28 03:08:54.637112 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.77s
2026-03-28 03:08:54.637123 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.64s
2026-03-28 03:08:54.637133 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.07s
2026-03-28 03:08:54.637144 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.56s
2026-03-28 03:08:54.637155 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.54s
2026-03-28 03:08:54.637165 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.49s
2026-03-28 03:08:54.637176 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 5.04s
2026-03-28 03:08:54.637187 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.74s
2026-03-28 03:08:54.637198 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.64s
2026-03-28 03:08:54.637209 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.97s
2026-03-28 03:08:54.637220 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.80s
2026-03-28 03:08:54.637231 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.56s
2026-03-28 03:08:54.637241 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.21s
2026-03-28 03:08:54.637252 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.09s
2026-03-28 03:08:57.053996 | orchestrator | 2026-03-28 03:08:57 | INFO  | Task bdfae37b-a06a-4cd1-b8a9-572a9a19b81a (ceph-pools) was prepared for execution.
2026-03-28 03:08:57.054196 | orchestrator | 2026-03-28 03:08:57 | INFO  | It takes a moment until task bdfae37b-a06a-4cd1-b8a9-572a9a19b81a (ceph-pools) has been started and output is visible here.
2026-03-28 03:09:11.529734 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 03:09:11.529835 | orchestrator | 2.16.14
2026-03-28 03:09:11.529847 | orchestrator |
2026-03-28 03:09:11.529854 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-03-28 03:09:11.529861 | orchestrator |
2026-03-28 03:09:11.529867 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 03:09:11.529873 | orchestrator | Saturday 28 March 2026 03:09:01 +0000 (0:00:00.623) 0:00:00.623 ********
2026-03-28 03:09:11.529879 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 03:09:11.529885 | orchestrator |
2026-03-28 03:09:11.529891 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 03:09:11.529897 | orchestrator | Saturday 28 March 2026 03:09:02 +0000 (0:00:00.700) 0:00:01.323 ********
2026-03-28 03:09:11.529903 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.529909 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.529916 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.529922 | orchestrator |
2026-03-28 03:09:11.529928 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 03:09:11.529934 | orchestrator | Saturday 28 March 2026 03:09:02 +0000 (0:00:00.650) 0:00:01.973 ********
2026-03-28 03:09:11.529941 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.529947 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.529956 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.529964 | orchestrator |
2026-03-28 03:09:11.529970 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 03:09:11.529976 | orchestrator | Saturday 28 March 2026 03:09:03 +0000 (0:00:00.319) 0:00:02.292 ********
2026-03-28 03:09:11.529982 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.529988 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.529994 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530000 | orchestrator |
2026-03-28 03:09:11.530075 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 03:09:11.530084 | orchestrator | Saturday 28 March 2026 03:09:04 +0000 (0:00:00.845) 0:00:03.138 ********
2026-03-28 03:09:11.530091 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.530098 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.530104 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530111 | orchestrator |
2026-03-28 03:09:11.530117 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 03:09:11.530125 | orchestrator | Saturday 28 March 2026 03:09:04 +0000 (0:00:00.342) 0:00:03.481 ********
2026-03-28 03:09:11.530132 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.530138 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.530144 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530151 | orchestrator |
2026-03-28 03:09:11.530157 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 03:09:11.530164 | orchestrator | Saturday 28 March 2026 03:09:04 +0000 (0:00:00.327) 0:00:03.809 ********
2026-03-28 03:09:11.530171 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.530177 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.530183 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530190 | orchestrator |
2026-03-28 03:09:11.530197 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 03:09:11.530203 | orchestrator | Saturday 28 March 2026 03:09:05 +0000 (0:00:00.338) 0:00:04.147 ********
2026-03-28 03:09:11.530210 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:11.530218 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:11.530224 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:11.530230 | orchestrator |
2026-03-28 03:09:11.530237 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 03:09:11.530260 | orchestrator | Saturday 28 March 2026 03:09:05 +0000 (0:00:00.538) 0:00:04.685 ********
2026-03-28 03:09:11.530267 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.530274 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.530280 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530286 | orchestrator |
2026-03-28 03:09:11.530293 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 03:09:11.530299 | orchestrator | Saturday 28 March 2026 03:09:05 +0000 (0:00:00.293) 0:00:04.979 ********
2026-03-28 03:09:11.530306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 03:09:11.530312 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 03:09:11.530319 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 03:09:11.530325 | orchestrator |
2026-03-28 03:09:11.530332 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 03:09:11.530338 | orchestrator | Saturday 28 March 2026 03:09:06 +0000 (0:00:00.679) 0:00:05.658 ********
2026-03-28 03:09:11.530344 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:11.530373 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:11.530379 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:11.530384 | orchestrator |
2026-03-28 03:09:11.530390 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 03:09:11.530396 | orchestrator | Saturday 28 March 2026 03:09:07 +0000 (0:00:00.450) 0:00:06.108 ********
2026-03-28 03:09:11.530402 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 03:09:11.530407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 03:09:11.530413 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 03:09:11.530418 | orchestrator |
2026-03-28 03:09:11.530423 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 03:09:11.530429 | orchestrator | Saturday 28 March 2026 03:09:09 +0000 (0:00:02.241) 0:00:08.350 ********
2026-03-28 03:09:11.530436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 03:09:11.530442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 03:09:11.530448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 03:09:11.530454 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:11.530460 | orchestrator |
2026-03-28 03:09:11.530484 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 03:09:11.530491 | orchestrator | Saturday 28 March 2026 03:09:10 +0000 (0:00:00.681) 0:00:09.031 ********
2026-03-28 03:09:11.530499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530520 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:11.530526 | orchestrator |
2026-03-28 03:09:11.530531 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 03:09:11.530537 | orchestrator | Saturday 28 March 2026 03:09:11 +0000 (0:00:01.118) 0:00:10.150 ********
2026-03-28 03:09:11.530553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530584 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:11.530590 | orchestrator |
2026-03-28 03:09:11.530597 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 03:09:11.530603 | orchestrator | Saturday 28 March 2026 03:09:11 +0000 (0:00:00.171) 0:00:10.322 ********
2026-03-28 03:09:11.530611 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a580dbf75b8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 03:09:07.989566', 'end': '2026-03-28 03:09:08.042685', 'delta': '0:00:00.053119', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a580dbf75b8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530620 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '63c01d28d51e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 03:09:08.549237', 'end': '2026-03-28 03:09:08.594401', 'delta': '0:00:00.045164', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63c01d28d51e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 03:09:11.530634 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '99ef085e2de2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 03:09:09.133387', 'end': '2026-03-28 03:09:09.188001', 'delta': '0:00:00.054614', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['99ef085e2de2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 03:09:18.472841 | orchestrator |
2026-03-28 03:09:18.472956 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 03:09:18.472974 | orchestrator | Saturday 28 March 2026 03:09:11 +0000 (0:00:00.201) 0:00:10.523 ********
2026-03-28 03:09:18.473011 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:18.473025 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:18.473036 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:18.473047 | orchestrator |
2026-03-28 03:09:18.473058 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 03:09:18.473070 | orchestrator | Saturday 28 March 2026 03:09:11 +0000 (0:00:00.474) 0:00:10.998 ********
2026-03-28 03:09:18.473082 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 03:09:18.473093 | orchestrator |
2026-03-28 03:09:18.473120 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 03:09:18.473131 | orchestrator | Saturday 28 March 2026 03:09:13 +0000 (0:00:01.668) 0:00:12.667 ********
2026-03-28 03:09:18.473143 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473154 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473165 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473176 | orchestrator |
2026-03-28 03:09:18.473187 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 03:09:18.473198 | orchestrator | Saturday 28 March 2026 03:09:13 +0000 (0:00:00.327) 0:00:12.995 ********
2026-03-28 03:09:18.473209 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473220 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473231 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473242 | orchestrator |
2026-03-28 03:09:18.473253 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 03:09:18.473264 | orchestrator | Saturday 28 March 2026 03:09:14 +0000 (0:00:00.696) 0:00:13.691 ********
2026-03-28 03:09:18.473275 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473286 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473297 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473308 | orchestrator |
2026-03-28 03:09:18.473320 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 03:09:18.473331 | orchestrator | Saturday 28 March 2026 03:09:15 +0000 (0:00:00.328) 0:00:14.020 ********
2026-03-28 03:09:18.473342 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:18.473395 | orchestrator |
2026-03-28 03:09:18.473409 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 03:09:18.473422 | orchestrator | Saturday 28 March 2026 03:09:15 +0000 (0:00:00.142) 0:00:14.162 ********
2026-03-28 03:09:18.473435 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473447 | orchestrator |
2026-03-28 03:09:18.473458 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 03:09:18.473469 | orchestrator | Saturday 28 March 2026 03:09:15 +0000 (0:00:00.247) 0:00:14.410 ********
2026-03-28 03:09:18.473480 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473491 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473502 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473513 | orchestrator |
2026-03-28 03:09:18.473524 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 03:09:18.473535 | orchestrator | Saturday 28 March 2026 03:09:15 +0000 (0:00:00.297) 0:00:14.708 ********
2026-03-28 03:09:18.473546 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473557 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473568 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473579 | orchestrator |
2026-03-28 03:09:18.473590 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 03:09:18.473601 | orchestrator | Saturday 28 March 2026 03:09:16 +0000 (0:00:00.524) 0:00:15.232 ********
2026-03-28 03:09:18.473612 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473623 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473634 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473645 | orchestrator |
2026-03-28 03:09:18.473656 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 03:09:18.473667 | orchestrator | Saturday 28 March 2026 03:09:16 +0000 (0:00:00.370) 0:00:15.603 ********
2026-03-28 03:09:18.473687 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473698 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473709 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473721 | orchestrator |
2026-03-28 03:09:18.473732 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 03:09:18.473743 | orchestrator | Saturday 28 March 2026 03:09:16 +0000 (0:00:00.343) 0:00:15.947 ********
2026-03-28 03:09:18.473754 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473764 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473775 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473786 | orchestrator |
2026-03-28 03:09:18.473797 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 03:09:18.473808 | orchestrator | Saturday 28 March 2026 03:09:17 +0000 (0:00:00.327) 0:00:16.274 ********
2026-03-28 03:09:18.473819 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.473961 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.473978 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.473989 | orchestrator |
2026-03-28 03:09:18.474001 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 03:09:18.474091 | orchestrator | Saturday 28 March 2026 03:09:17 +0000 (0:00:00.559) 0:00:16.834 ********
2026-03-28 03:09:18.474105 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:18.474116 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:18.474127 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:18.474138 | orchestrator |
2026-03-28 03:09:18.474149 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 03:09:18.474160 | orchestrator | Saturday 28 March 2026 03:09:18 +0000 (0:00:00.358) 0:00:17.193 ********
2026-03-28 03:09:18.474197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.474324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.510940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.511063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.511090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.511110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-28 03:09:18.511160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-28 03:09:18.511217 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.511233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.511245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.511265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.511279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.511292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.511304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.511324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-28 03:09:18.626144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626251 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:18.626268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.626428 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.626454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.626476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.626490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.626503 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:18.626515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.626560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 03:09:18.943857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.943960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.943988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.944011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.944033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 03:09:18.944056 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:09:18.944083 | orchestrator | 2026-03-28 03:09:18.944108 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-28 03:09:18.944131 | orchestrator | Saturday 28 March 2026 03:09:18 +0000 (0:00:00.651) 0:00:17.844 ******** 2026-03-28 03:09:18.944179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.060997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.061169 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.242839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.242978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243006 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243162 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.243217 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369490 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:19.369584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 
'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369720 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:19.369728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.369765 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493805 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493940 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.493988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.494069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.494100 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.494113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:19.494132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 03:09:31.922199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-01-42-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 03:09:31.922304 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:31.922314 | orchestrator |
2026-03-28 03:09:31.922321 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 03:09:31.922328 | orchestrator | Saturday 28 March 2026 03:09:19 +0000 (0:00:00.651) 0:00:18.496 ********
2026-03-28 03:09:31.922390 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:31.922396 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:31.922402 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:31.922407 | orchestrator |
2026-03-28 03:09:31.922413 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 03:09:31.922419 | orchestrator | Saturday 28 March 2026 03:09:20 +0000 (0:00:00.956) 0:00:19.453 ********
2026-03-28 03:09:31.922424 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:31.922430 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:31.922435 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:31.922441 | orchestrator |
2026-03-28 03:09:31.922446 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 03:09:31.922452 | orchestrator | Saturday 28 March 2026 03:09:20 +0000 (0:00:00.321) 0:00:19.774 ********
2026-03-28 03:09:31.922457 | orchestrator | ok: [testbed-node-3]
2026-03-28 03:09:31.922463 | orchestrator | ok: [testbed-node-4]
2026-03-28 03:09:31.922468 | orchestrator | ok: [testbed-node-5]
2026-03-28 03:09:31.922474 | orchestrator |
2026-03-28 03:09:31.922491 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 03:09:31.922497 | orchestrator | Saturday 28 March 2026 03:09:21 +0000 (0:00:00.629) 0:00:20.404 ********
2026-03-28 03:09:31.922502 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:31.922508 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:31.922513 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:31.922519 | orchestrator |
2026-03-28 03:09:31.922525 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 03:09:31.922530 | orchestrator | Saturday 28 March 2026 03:09:21 +0000 (0:00:00.332) 0:00:20.736 ********
2026-03-28 03:09:31.922536 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:31.922541 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:31.922547 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:31.922552 | orchestrator |
2026-03-28 03:09:31.922558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 03:09:31.922563 | orchestrator | Saturday 28 March 2026 03:09:22 +0000 (0:00:00.830) 0:00:21.567 ********
2026-03-28 03:09:31.922569 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:09:31.922574 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:09:31.922580 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:09:31.922585 | orchestrator |
2026-03-28 03:09:31.922591 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 03:09:31.922596 | orchestrator | Saturday 28 March 2026 03:09:22 +0000 (0:00:00.380) 0:00:21.947 ********
2026-03-28 03:09:31.922602 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 03:09:31.922608 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 03:09:31.922614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 03:09:31.922619 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 03:09:31.922625 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 03:09:31.922630 | orchestrator |
ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-28 03:09:31.922636 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 03:09:31.922647 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 03:09:31.922652 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 03:09:31.922658 | orchestrator | 2026-03-28 03:09:31.922664 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 03:09:31.922670 | orchestrator | Saturday 28 March 2026 03:09:24 +0000 (0:00:01.139) 0:00:23.087 ******** 2026-03-28 03:09:31.922675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 03:09:31.922681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 03:09:31.922687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 03:09:31.922692 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.922698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 03:09:31.922704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 03:09:31.922709 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 03:09:31.922715 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:31.922720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 03:09:31.922726 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 03:09:31.922731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 03:09:31.922737 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:09:31.922742 | orchestrator | 2026-03-28 03:09:31.922748 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 03:09:31.922754 | orchestrator | Saturday 28 March 2026 03:09:24 +0000 (0:00:00.397) 0:00:23.484 ******** 2026-03-28 
03:09:31.922771 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:09:31.922778 | orchestrator | 2026-03-28 03:09:31.922785 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 03:09:31.922793 | orchestrator | Saturday 28 March 2026 03:09:25 +0000 (0:00:00.773) 0:00:24.258 ******** 2026-03-28 03:09:31.922799 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.922806 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:31.922812 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:09:31.922819 | orchestrator | 2026-03-28 03:09:31.922825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 03:09:31.922831 | orchestrator | Saturday 28 March 2026 03:09:25 +0000 (0:00:00.331) 0:00:24.589 ******** 2026-03-28 03:09:31.922838 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.922845 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:31.922851 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:09:31.922857 | orchestrator | 2026-03-28 03:09:31.922863 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 03:09:31.922870 | orchestrator | Saturday 28 March 2026 03:09:25 +0000 (0:00:00.328) 0:00:24.918 ******** 2026-03-28 03:09:31.922876 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.922883 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:09:31.922889 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:09:31.922896 | orchestrator | 2026-03-28 03:09:31.922902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 03:09:31.922909 | orchestrator | Saturday 28 March 2026 03:09:26 +0000 (0:00:00.582) 0:00:25.500 ******** 2026-03-28 
03:09:31.922915 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:09:31.922922 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:09:31.922928 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:09:31.922935 | orchestrator | 2026-03-28 03:09:31.922941 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 03:09:31.922948 | orchestrator | Saturday 28 March 2026 03:09:26 +0000 (0:00:00.428) 0:00:25.929 ******** 2026-03-28 03:09:31.922954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:09:31.922965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:09:31.922972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:09:31.922982 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.922988 | orchestrator | 2026-03-28 03:09:31.922993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 03:09:31.922999 | orchestrator | Saturday 28 March 2026 03:09:27 +0000 (0:00:00.413) 0:00:26.342 ******** 2026-03-28 03:09:31.923005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:09:31.923010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:09:31.923016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:09:31.923022 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.923027 | orchestrator | 2026-03-28 03:09:31.923033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 03:09:31.923038 | orchestrator | Saturday 28 March 2026 03:09:27 +0000 (0:00:00.391) 0:00:26.734 ******** 2026-03-28 03:09:31.923044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 03:09:31.923049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 03:09:31.923055 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 03:09:31.923060 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:09:31.923066 | orchestrator | 2026-03-28 03:09:31.923072 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 03:09:31.923077 | orchestrator | Saturday 28 March 2026 03:09:28 +0000 (0:00:00.434) 0:00:27.169 ******** 2026-03-28 03:09:31.923083 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:09:31.923088 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:09:31.923094 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:09:31.923099 | orchestrator | 2026-03-28 03:09:31.923105 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 03:09:31.923110 | orchestrator | Saturday 28 March 2026 03:09:28 +0000 (0:00:00.327) 0:00:27.497 ******** 2026-03-28 03:09:31.923116 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 03:09:31.923121 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 03:09:31.923127 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 03:09:31.923132 | orchestrator | 2026-03-28 03:09:31.923138 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 03:09:31.923144 | orchestrator | Saturday 28 March 2026 03:09:29 +0000 (0:00:00.874) 0:00:28.371 ******** 2026-03-28 03:09:31.923149 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 03:09:31.923155 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 03:09:31.923160 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 03:09:31.923166 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 03:09:31.923171 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-28 03:09:31.923177 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 03:09:31.923182 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 03:09:31.923191 | orchestrator | 2026-03-28 03:09:31.923200 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 03:09:31.923209 | orchestrator | Saturday 28 March 2026 03:09:30 +0000 (0:00:00.857) 0:00:29.228 ******** 2026-03-28 03:09:31.923218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 03:09:31.923233 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 03:11:13.363719 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 03:11:13.363848 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 03:11:13.363889 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 03:11:13.363902 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 03:11:13.363914 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 03:11:13.363925 | orchestrator | 2026-03-28 03:11:13.363938 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-28 03:11:13.363950 | orchestrator | Saturday 28 March 2026 03:09:31 +0000 (0:00:01.688) 0:00:30.917 ******** 2026-03-28 03:11:13.363961 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:11:13.363974 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:11:13.363985 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-28 03:11:13.363996 | orchestrator | 2026-03-28 03:11:13.364007 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-28 03:11:13.364018 | orchestrator | Saturday 28 March 2026 03:09:32 +0000 (0:00:00.592) 0:00:31.509 ******** 2026-03-28 03:11:13.364031 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 03:11:13.364045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 03:11:13.364071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 03:11:13.364083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 03:11:13.364094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 03:11:13.364105 | orchestrator | 2026-03-28 03:11:13.364116 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-28 03:11:13.364128 | orchestrator | Saturday 28 March 2026 03:10:18 +0000 (0:00:45.770) 0:01:17.280 ******** 2026-03-28 03:11:13.364139 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364150 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364171 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364213 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-28 03:11:13.364272 | orchestrator | 2026-03-28 03:11:13.364295 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-28 03:11:13.364314 | orchestrator | Saturday 28 March 2026 03:10:42 +0000 (0:00:24.613) 0:01:41.893 ******** 2026-03-28 03:11:13.364333 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364365 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364384 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364402 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364420 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364438 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364455 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 03:11:13.364473 | orchestrator | 2026-03-28 03:11:13.364490 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-28 03:11:13.364509 | orchestrator | Saturday 28 March 2026 03:10:55 +0000 (0:00:12.421) 0:01:54.315 ******** 2026-03-28 03:11:13.364527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364570 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:11:13.364592 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:11:13.364610 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364630 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:11:13.364648 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:11:13.364665 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364684 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:11:13.364702 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:11:13.364719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364737 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 03:11:13.364757 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 03:11:13.364776 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 03:11:13.364794 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-28 03:11:13.364813 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 03:11:13.364833 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 03:11:13.364851 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 03:11:13.364869 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 03:11:13.364886 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-28 03:11:13.364897 | orchestrator |
2026-03-28 03:11:13.364908 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:11:13.364928 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-28 03:11:13.364940 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-28 03:11:13.364952 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-28 03:11:13.364963 | orchestrator |
2026-03-28 03:11:13.364974 | orchestrator |
2026-03-28 03:11:13.364984 | orchestrator |
2026-03-28 03:11:13.364995 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:11:13.365006 | orchestrator | Saturday 28 March 2026 03:11:12 +0000 (0:00:17.647) 0:02:11.962 ********
2026-03-28 03:11:13.365016 | orchestrator | ===============================================================================
2026-03-28 03:11:13.365037 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.77s
2026-03-28 03:11:13.365048 | orchestrator | generate keys ---------------------------------------------------------- 24.61s
2026-03-28 03:11:13.365058 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.65s
2026-03-28 03:11:13.365069 | orchestrator | get keys from monitors ------------------------------------------------- 12.42s
2026-03-28 03:11:13.365080 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.24s
2026-03-28 03:11:13.365091 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.69s
2026-03-28 03:11:13.365102 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.67s
2026-03-28 03:11:13.365113 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.14s
2026-03-28 03:11:13.365123 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.12s
2026-03-28 03:11:13.365134 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.96s
2026-03-28 03:11:13.365145 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.87s
2026-03-28 03:11:13.365156 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.86s
2026-03-28 03:11:13.365167 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.85s
2026-03-28 03:11:13.365177 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.83s
2026-03-28 03:11:13.365188 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.77s
2026-03-28 03:11:13.365199 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.70s
2026-03-28 03:11:13.365209 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.70s
2026-03-28 03:11:13.365220 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.68s
2026-03-28 03:11:13.365231 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2026-03-28 03:11:13.365270 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.65s
2026-03-28 03:11:15.790969 | orchestrator | 2026-03-28 03:11:15 | INFO  | Task 8c761d7b-ed0c-40b7-9e94-769232c7e387 (copy-ceph-keys) was prepared for execution.
2026-03-28 03:11:15.791082 | orchestrator | 2026-03-28 03:11:15 | INFO  | It takes a moment until task 8c761d7b-ed0c-40b7-9e94-769232c7e387 (copy-ceph-keys) has been started and output is visible here.
2026-03-28 03:11:55.351663 | orchestrator |
2026-03-28 03:11:55.351785 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-28 03:11:55.351803 | orchestrator |
2026-03-28 03:11:55.351815 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-28 03:11:55.351827 | orchestrator | Saturday 28 March 2026 03:11:20 +0000 (0:00:00.180) 0:00:00.180 ********
2026-03-28 03:11:55.351838 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 03:11:55.351850 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.351861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.351872 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 03:11:55.351882 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.351893 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 03:11:55.351904 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 03:11:55.351915 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 03:11:55.351950 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 03:11:55.351962 | orchestrator |
2026-03-28 03:11:55.351973 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-28 03:11:55.351984 | orchestrator | Saturday 28 March 2026 03:11:24 +0000 (0:00:04.620) 0:00:04.801 ********
2026-03-28 03:11:55.351995 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 03:11:55.352021 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352032 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352043 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 03:11:55.352054 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352065 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 03:11:55.352076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 03:11:55.352086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 03:11:55.352097 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 03:11:55.352108 | orchestrator |
2026-03-28 03:11:55.352119 | orchestrator | TASK [Create share directory] **************************************************
2026-03-28 03:11:55.352129 | orchestrator | Saturday 28 March 2026 03:11:29 +0000 (0:00:04.521) 0:00:09.322 ********
2026-03-28 03:11:55.352141 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 03:11:55.352152 | orchestrator |
2026-03-28 03:11:55.352164 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-28 03:11:55.352175 | orchestrator | Saturday 28 March 2026 03:11:30 +0000 (0:00:01.009) 0:00:10.332 ********
2026-03-28 03:11:55.352185 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-28 03:11:55.352198 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352240 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352263 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 03:11:55.352278 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352291 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-28 03:11:55.352304 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-28 03:11:55.352316 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-28 03:11:55.352328 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-28 03:11:55.352341 | orchestrator |
2026-03-28 03:11:55.352353 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-28 03:11:55.352366 | orchestrator | Saturday 28 March 2026 03:11:44 +0000 (0:00:14.120) 0:00:24.452 ********
2026-03-28 03:11:55.352378 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-28 03:11:55.352391 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-28 03:11:55.352404 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 03:11:55.352417 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 03:11:55.352449 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 03:11:55.352473 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 03:11:55.352484 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-28 03:11:55.352495 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-28 03:11:55.352506 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-28 03:11:55.352517 | orchestrator |
2026-03-28 03:11:55.352528 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-28 03:11:55.352539 | orchestrator | Saturday 28 March 2026 03:11:47 +0000 (0:00:03.230) 0:00:27.683 ********
2026-03-28 03:11:55.352551 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-28 03:11:55.352562 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352573 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352584 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 03:11:55.352594 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 03:11:55.352605 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-28 03:11:55.352616 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-28 03:11:55.352627 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-28 03:11:55.352637 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-28 03:11:55.352648 | orchestrator |
2026-03-28 03:11:55.352660 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:11:55.352679 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 03:11:55.352699 | orchestrator |
2026-03-28 03:11:55.352730 | orchestrator |
2026-03-28 03:11:55.352749 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:11:55.352766 | orchestrator | Saturday 28 March 2026 03:11:54 +0000 (0:00:07.342) 0:00:35.025 ********
2026-03-28 03:11:55.352784 | orchestrator | ===============================================================================
2026-03-28 03:11:55.352802 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.12s
2026-03-28 03:11:55.352817 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.34s
2026-03-28 03:11:55.352835 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.62s
2026-03-28 03:11:55.352852 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.52s
2026-03-28 03:11:55.352869 | orchestrator | Check if target directories exist --------------------------------------- 3.23s
2026-03-28 03:11:55.352887 | orchestrator | Create share directory -------------------------------------------------- 1.01s
2026-03-28 03:12:07.868150 | orchestrator | 2026-03-28 03:12:07 | INFO  | Task 06208dc2-094b-46fb-8ac1-fda10204886a (cephclient) was prepared for execution.
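The copy-ceph-keys play above fetches the client keyrings (admin, cinder, cinder-backup, nova, glance, gnocchi, manila) from the first monitor and writes them into the configuration repository. As a rough illustration of what a well-formed result should look like, the following sketch checks the structure of a keyring file; the sample content, helper name, and check logic are illustrative assumptions, not taken from the OSISM roles:

```shell
#!/bin/sh
# Hedged sketch: verify that a Ceph client keyring file is well-formed.
# The sample keyring below is a throwaway fixture; real keys come from the
# monitors (e.g. via "ceph auth get-or-create").
set -eu

tmp=$(mktemp -d)
cat > "$tmp/ceph.client.cinder.keyring" <<'EOF'
[client.cinder]
    key = QVFEbm9rZXlub2tleW5va2V5bm9rZXlub2tleQ==
EOF

check_keyring() {
    # A keyring must contain a [client.NAME] section and a key entry.
    name=$1
    file=$2
    grep -q "^\[client\.$name\]" "$file"
    grep -Eq "key[[:space:]]*=[[:space:]]*[A-Za-z0-9+/=]+" "$file"
}

check_keyring cinder "$tmp/ceph.client.cinder.keyring" && echo "keyring ok"
```

Running such a check after the "Write ceph keys to the configuration directory" task would catch truncated or empty keyring files before the kolla overlays consume them.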
2026-03-28 03:12:07.868276 | orchestrator | 2026-03-28 03:12:07 | INFO  | It takes a moment until task 06208dc2-094b-46fb-8ac1-fda10204886a (cephclient) has been started and output is visible here.
2026-03-28 03:13:11.287517 | orchestrator |
2026-03-28 03:13:11.287599 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-28 03:13:11.287606 | orchestrator |
2026-03-28 03:13:11.287610 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-28 03:13:11.287616 | orchestrator | Saturday 28 March 2026 03:12:12 +0000 (0:00:00.261) 0:00:00.261 ********
2026-03-28 03:13:11.287621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-28 03:13:11.287642 | orchestrator |
2026-03-28 03:13:11.287646 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-28 03:13:11.287650 | orchestrator | Saturday 28 March 2026 03:12:12 +0000 (0:00:00.248) 0:00:00.510 ********
2026-03-28 03:13:11.287655 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-28 03:13:11.287660 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-28 03:13:11.287664 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-28 03:13:11.287668 | orchestrator |
2026-03-28 03:13:11.287673 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-28 03:13:11.287677 | orchestrator | Saturday 28 March 2026 03:12:13 +0000 (0:00:01.300) 0:00:01.811 ********
2026-03-28 03:13:11.287681 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-28 03:13:11.287685 | orchestrator |
2026-03-28 03:13:11.287689 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-28 03:13:11.287693 | orchestrator | Saturday 28 March 2026 03:12:15 +0000 (0:00:01.502) 0:00:03.313 ********
2026-03-28 03:13:11.287697 | orchestrator | changed: [testbed-manager]
2026-03-28 03:13:11.287701 | orchestrator |
2026-03-28 03:13:11.287706 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-28 03:13:11.287710 | orchestrator | Saturday 28 March 2026 03:12:16 +0000 (0:00:00.959) 0:00:04.338 ********
2026-03-28 03:13:11.287713 | orchestrator | changed: [testbed-manager]
2026-03-28 03:13:11.287717 | orchestrator |
2026-03-28 03:13:11.287721 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-28 03:13:11.287725 | orchestrator | Saturday 28 March 2026 03:12:17 +0000 (0:00:00.959) 0:00:05.297 ********
2026-03-28 03:13:11.287729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-28 03:13:11.287733 | orchestrator | ok: [testbed-manager]
2026-03-28 03:13:11.287738 | orchestrator |
2026-03-28 03:13:11.287742 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-28 03:13:11.287746 | orchestrator | Saturday 28 March 2026 03:13:00 +0000 (0:00:43.646) 0:00:48.944 ********
2026-03-28 03:13:11.287750 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-28 03:13:11.287754 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-28 03:13:11.287757 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-28 03:13:11.287761 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-28 03:13:11.287765 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-28 03:13:11.287769 | orchestrator |
2026-03-28 03:13:11.287774 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-28 03:13:11.287778 | orchestrator | Saturday 28 March 2026 03:13:05 +0000 (0:00:04.269) 0:00:53.213 ********
2026-03-28 03:13:11.287782 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-28 03:13:11.287785 | orchestrator |
2026-03-28 03:13:11.287789 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-28 03:13:11.287793 | orchestrator | Saturday 28 March 2026 03:13:05 +0000 (0:00:00.468) 0:00:53.682 ********
2026-03-28 03:13:11.287797 | orchestrator | skipping: [testbed-manager]
2026-03-28 03:13:11.287801 | orchestrator |
2026-03-28 03:13:11.287805 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-28 03:13:11.287809 | orchestrator | Saturday 28 March 2026 03:13:05 +0000 (0:00:00.158) 0:00:53.840 ********
2026-03-28 03:13:11.287813 | orchestrator | skipping: [testbed-manager]
2026-03-28 03:13:11.287817 | orchestrator |
2026-03-28 03:13:11.287821 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-28 03:13:11.287825 | orchestrator | Saturday 28 March 2026 03:13:06 +0000 (0:00:00.567) 0:00:54.408 ********
2026-03-28 03:13:11.287839 | orchestrator | changed: [testbed-manager]
2026-03-28 03:13:11.287843 | orchestrator |
2026-03-28 03:13:11.287847 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-28 03:13:11.287858 | orchestrator | Saturday 28 March 2026 03:13:07 +0000 (0:00:01.528) 0:00:55.936 ********
2026-03-28 03:13:11.287862 | orchestrator | changed: [testbed-manager]
2026-03-28 03:13:11.287866 | orchestrator |
2026-03-28 03:13:11.287870 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-28 03:13:11.287874 | orchestrator | Saturday 28 March 2026 03:13:08 +0000 (0:00:00.720) 0:00:56.657 ********
2026-03-28 03:13:11.287878 | orchestrator | changed: [testbed-manager]
2026-03-28 03:13:11.287882 | orchestrator |
orchestrator | 2026-03-28 03:13:11.287886 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-28 03:13:11.287890 | orchestrator | Saturday 28 March 2026 03:13:09 +0000 (0:00:00.603) 0:00:57.260 ******** 2026-03-28 03:13:11.287894 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-28 03:13:11.287897 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-28 03:13:11.287901 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-28 03:13:11.287905 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-28 03:13:11.287909 | orchestrator | 2026-03-28 03:13:11.287914 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:13:11.287918 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 03:13:11.287922 | orchestrator | 2026-03-28 03:13:11.287926 | orchestrator | 2026-03-28 03:13:11.287940 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:13:11.287944 | orchestrator | Saturday 28 March 2026 03:13:10 +0000 (0:00:01.590) 0:00:58.851 ******** 2026-03-28 03:13:11.287948 | orchestrator | =============================================================================== 2026-03-28 03:13:11.287952 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.65s 2026-03-28 03:13:11.287956 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.27s 2026-03-28 03:13:11.287960 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s 2026-03-28 03:13:11.287964 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.53s 2026-03-28 03:13:11.287968 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.50s 2026-03-28 03:13:11.287972 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s 2026-03-28 03:13:11.287976 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2026-03-28 03:13:11.287980 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s 2026-03-28 03:13:11.287984 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2026-03-28 03:13:11.287988 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-03-28 03:13:11.287992 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.57s 2026-03-28 03:13:11.287996 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2026-03-28 03:13:11.288000 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-28 03:13:11.288004 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-03-28 03:13:15.257836 | orchestrator | 2026-03-28 03:13:15 | INFO  | Task 165c33bb-14b3-4249-8595-7ec13bcae43a (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-28 03:13:15.257917 | orchestrator | 2026-03-28 03:13:15 | INFO  | It takes a moment until task 165c33bb-14b3-4249-8595-7ec13bcae43a (ceph-bootstrap-dashboard) has been started and output is visible here. 
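The cephclient play above runs the client as a docker-compose managed container and installs thin wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) on the host so the tools can be called as if installed natively. A minimal sketch of how such wrappers could be generated — the output directory and the exact `docker compose exec` invocation are assumptions for illustration, not taken from the role:

```shell
#!/usr/bin/env bash
# Sketch: generate host-side wrapper scripts that forward CLI calls
# into a running cephclient container. Paths and the compose project
# directory are hypothetical; the real osism.services.cephclient role
# templates its wrappers differently.
set -eu

WRAPPER_DIR="${WRAPPER_DIR:-/tmp/cephclient-wrappers}"
mkdir -p "$WRAPPER_DIR"

for cmd in ceph ceph-authtool rados radosgw-admin rbd; do
    cat > "$WRAPPER_DIR/$cmd" <<EOF
#!/usr/bin/env bash
# Forward "$cmd" into the cephclient container, passing all arguments through.
exec docker compose --project-directory /opt/cephclient exec cephclient $cmd "\$@"
EOF
    chmod +x "$WRAPPER_DIR/$cmd"
done
```

With wrappers like these on PATH, `ceph -s` on the manager transparently executes inside the container, which is why the log's later dashboard bootstrap tasks can drive `ceph` commands from testbed-manager without a host-level Ceph package install.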
2026-03-28 03:14:39.613584 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 03:14:39.613721 | orchestrator | 2.16.14 2026-03-28 03:14:39.613739 | orchestrator | 2026-03-28 03:14:39.613752 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-28 03:14:39.613762 | orchestrator | 2026-03-28 03:14:39.613772 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-28 03:14:39.613806 | orchestrator | Saturday 28 March 2026 03:13:19 +0000 (0:00:00.286) 0:00:00.286 ******** 2026-03-28 03:14:39.613817 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.613827 | orchestrator | 2026-03-28 03:14:39.613837 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-28 03:14:39.613847 | orchestrator | Saturday 28 March 2026 03:13:21 +0000 (0:00:01.802) 0:00:02.089 ******** 2026-03-28 03:14:39.613857 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.613866 | orchestrator | 2026-03-28 03:14:39.613876 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-28 03:14:39.613886 | orchestrator | Saturday 28 March 2026 03:13:22 +0000 (0:00:01.175) 0:00:03.264 ******** 2026-03-28 03:14:39.613895 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.613905 | orchestrator | 2026-03-28 03:14:39.613933 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-28 03:14:39.613943 | orchestrator | Saturday 28 March 2026 03:13:23 +0000 (0:00:01.119) 0:00:04.384 ******** 2026-03-28 03:14:39.613962 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.613973 | orchestrator | 2026-03-28 03:14:39.613982 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-28 03:14:39.613992 | orchestrator | Saturday 28 March 
2026 03:13:25 +0000 (0:00:01.273) 0:00:05.657 ******** 2026-03-28 03:14:39.614001 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.614011 | orchestrator | 2026-03-28 03:14:39.614078 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-28 03:14:39.614090 | orchestrator | Saturday 28 March 2026 03:13:26 +0000 (0:00:01.078) 0:00:06.735 ******** 2026-03-28 03:14:39.614115 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.614159 | orchestrator | 2026-03-28 03:14:39.614174 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-28 03:14:39.614186 | orchestrator | Saturday 28 March 2026 03:13:27 +0000 (0:00:01.107) 0:00:07.843 ******** 2026-03-28 03:14:39.614197 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.614208 | orchestrator | 2026-03-28 03:14:39.614220 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-28 03:14:39.614237 | orchestrator | Saturday 28 March 2026 03:13:29 +0000 (0:00:02.082) 0:00:09.925 ******** 2026-03-28 03:14:39.614254 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.614270 | orchestrator | 2026-03-28 03:14:39.614285 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-28 03:14:39.614303 | orchestrator | Saturday 28 March 2026 03:13:30 +0000 (0:00:01.229) 0:00:11.155 ******** 2026-03-28 03:14:39.614319 | orchestrator | changed: [testbed-manager] 2026-03-28 03:14:39.614337 | orchestrator | 2026-03-28 03:14:39.614355 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-28 03:14:39.614372 | orchestrator | Saturday 28 March 2026 03:14:14 +0000 (0:00:43.839) 0:00:54.994 ******** 2026-03-28 03:14:39.614389 | orchestrator | skipping: [testbed-manager] 2026-03-28 03:14:39.614400 | orchestrator | 2026-03-28 03:14:39.614410 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-28 03:14:39.614419 | orchestrator | 2026-03-28 03:14:39.614429 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 03:14:39.614438 | orchestrator | Saturday 28 March 2026 03:14:14 +0000 (0:00:00.168) 0:00:55.163 ******** 2026-03-28 03:14:39.614448 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:14:39.614458 | orchestrator | 2026-03-28 03:14:39.614467 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 03:14:39.614477 | orchestrator | 2026-03-28 03:14:39.614486 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 03:14:39.614496 | orchestrator | Saturday 28 March 2026 03:14:26 +0000 (0:00:12.083) 0:01:07.247 ******** 2026-03-28 03:14:39.614506 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:14:39.614515 | orchestrator | 2026-03-28 03:14:39.614525 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 03:14:39.614545 | orchestrator | 2026-03-28 03:14:39.614555 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 03:14:39.614565 | orchestrator | Saturday 28 March 2026 03:14:37 +0000 (0:00:11.221) 0:01:18.468 ******** 2026-03-28 03:14:39.614575 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:14:39.614585 | orchestrator | 2026-03-28 03:14:39.614594 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:14:39.614605 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 03:14:39.614616 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:14:39.614626 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:14:39.614636 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:14:39.614645 | orchestrator | 2026-03-28 03:14:39.614655 | orchestrator | 2026-03-28 03:14:39.614665 | orchestrator | 2026-03-28 03:14:39.614675 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:14:39.614684 | orchestrator | Saturday 28 March 2026 03:14:39 +0000 (0:00:01.269) 0:01:19.738 ******** 2026-03-28 03:14:39.614694 | orchestrator | =============================================================================== 2026-03-28 03:14:39.614704 | orchestrator | Create admin user ------------------------------------------------------ 43.84s 2026-03-28 03:14:39.614732 | orchestrator | Restart ceph manager service ------------------------------------------- 24.58s 2026-03-28 03:14:39.614743 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-03-28 03:14:39.614752 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.80s 2026-03-28 03:14:39.614762 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.27s 2026-03-28 03:14:39.614771 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s 2026-03-28 03:14:39.614781 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.18s 2026-03-28 03:14:39.614790 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.12s 2026-03-28 03:14:39.614800 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-03-28 03:14:39.614809 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.08s 2026-03-28 03:14:39.614819 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-03-28 03:14:39.964908 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-28 03:14:42.052447 | orchestrator | 2026-03-28 03:14:42 | INFO  | Task f5697259-8696-4976-8656-47ae2c07dea7 (keystone) was prepared for execution. 2026-03-28 03:14:42.052770 | orchestrator | 2026-03-28 03:14:42 | INFO  | It takes a moment until task f5697259-8696-4976-8656-47ae2c07dea7 (keystone) has been started and output is visible here. 2026-03-28 03:14:49.350073 | orchestrator | 2026-03-28 03:14:49.350210 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:14:49.350247 | orchestrator | 2026-03-28 03:14:49.350276 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:14:49.350309 | orchestrator | Saturday 28 March 2026 03:14:46 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-03-28 03:14:49.350319 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:14:49.350329 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:14:49.350338 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:14:49.350347 | orchestrator | 2026-03-28 03:14:49.350356 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:14:49.350364 | orchestrator | Saturday 28 March 2026 03:14:46 +0000 (0:00:00.354) 0:00:00.635 ******** 2026-03-28 03:14:49.350394 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-28 03:14:49.350403 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-28 03:14:49.350412 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-28 03:14:49.350421 | orchestrator | 2026-03-28 03:14:49.350429 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-28 03:14:49.350438 | orchestrator | 2026-03-28 03:14:49.350447 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-28 03:14:49.350456 | orchestrator | Saturday 28 March 2026 03:14:47 +0000 (0:00:00.483) 0:00:01.118 ******** 2026-03-28 03:14:49.350465 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:14:49.350474 | orchestrator | 2026-03-28 03:14:49.350483 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-28 03:14:49.350491 | orchestrator | Saturday 28 March 2026 03:14:47 +0000 (0:00:00.620) 0:00:01.738 ******** 2026-03-28 03:14:49.350507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:49.350522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:49.350555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:49.350574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:49.350639 | orchestrator | 2026-03-28 03:14:49.350648 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-28 03:14:49.350664 | orchestrator | Saturday 28 March 2026 03:14:49 +0000 (0:00:01.485) 0:00:03.223 ******** 2026-03-28 03:14:55.281082 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:14:55.281225 | orchestrator | 2026-03-28 03:14:55.281247 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-28 03:14:55.281285 | orchestrator | Saturday 28 March 2026 03:14:49 +0000 (0:00:00.309) 0:00:03.533 ******** 2026-03-28 03:14:55.281296 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:14:55.281305 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:14:55.281314 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:14:55.281322 | orchestrator | 2026-03-28 03:14:55.281331 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-28 03:14:55.281340 | orchestrator | Saturday 28 March 2026 03:14:49 +0000 (0:00:00.347) 0:00:03.880 ******** 2026-03-28 03:14:55.281349 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:14:55.281357 | orchestrator | 2026-03-28 03:14:55.281366 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 03:14:55.281375 | orchestrator | Saturday 28 March 2026 03:14:50 +0000 (0:00:00.852) 0:00:04.733 ******** 2026-03-28 03:14:55.281397 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:14:55.281416 | orchestrator | 2026-03-28 03:14:55.281425 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-28 03:14:55.281434 | orchestrator | Saturday 28 March 2026 03:14:51 +0000 (0:00:00.569) 0:00:05.302 ******** 2026-03-28 03:14:55.281447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:55.281460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:55.281471 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:55.281535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:14:55.281608 | orchestrator | 2026-03-28 03:14:55.281618 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-28 03:14:55.281629 | orchestrator | Saturday 28 March 2026 03:14:54 +0000 (0:00:03.227) 0:00:08.530 ******** 2026-03-28 03:14:55.281649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:56.068032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:14:56.068230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:56.068253 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:14:56.068271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:56.068307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:14:56.068326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:56.068338 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:14:56.068369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:56.068381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-28 03:14:56.068393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:56.068413 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:14:56.068424 | orchestrator | 2026-03-28 03:14:56.068437 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-28 03:14:56.068449 | orchestrator | Saturday 28 March 2026 03:14:55 +0000 (0:00:00.634) 0:00:09.164 ******** 2026-03-28 03:14:56.068461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:56.068479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:14:56.068500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:59.742354 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:14:59.742457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:59.742476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:14:59.742509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:59.742522 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 03:14:59.742547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:14:59.742559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:14:59.742586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:14:59.742598 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:14:59.742608 | orchestrator | 2026-03-28 03:14:59.742619 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-28 03:14:59.742630 | orchestrator | Saturday 28 March 2026 03:14:56 +0000 (0:00:00.783) 0:00:09.948 ******** 2026-03-28 03:14:59.742641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:59.742659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:59.742676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:14:59.742696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:15:04.580485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:15:04.580658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-28 03:15:04.580689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:15:04.580712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:15:04.580753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 
03:15:04.580775 | orchestrator | 2026-03-28 03:15:04.580798 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-28 03:15:04.580819 | orchestrator | Saturday 28 March 2026 03:14:59 +0000 (0:00:03.672) 0:00:13.621 ******** 2026-03-28 03:15:04.580869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:15:04.580894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-28 03:15:04.580933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:15:04.580956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:15:04.580977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:15:04.580999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:15:08.425232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:15:08.425390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:15:08.425407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:15:08.425418 | orchestrator | 2026-03-28 03:15:08.425429 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-28 03:15:08.425440 | orchestrator | Saturday 28 March 2026 03:15:04 +0000 (0:00:04.836) 0:00:18.458 ******** 2026-03-28 03:15:08.425449 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:15:08.425458 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:15:08.425467 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:15:08.425476 | orchestrator | 
2026-03-28 03:15:08.425485 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-28 03:15:08.425494 | orchestrator | Saturday 28 March 2026 03:15:06 +0000 (0:00:01.538) 0:00:19.996 ******** 2026-03-28 03:15:08.425503 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:08.425512 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:08.425520 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:08.425529 | orchestrator | 2026-03-28 03:15:08.425538 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-28 03:15:08.425547 | orchestrator | Saturday 28 March 2026 03:15:06 +0000 (0:00:00.834) 0:00:20.831 ******** 2026-03-28 03:15:08.425556 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:08.425565 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:08.425574 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:08.425582 | orchestrator | 2026-03-28 03:15:08.425604 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-28 03:15:08.425613 | orchestrator | Saturday 28 March 2026 03:15:07 +0000 (0:00:00.565) 0:00:21.397 ******** 2026-03-28 03:15:08.425622 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:08.425631 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:08.425640 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:08.425649 | orchestrator | 2026-03-28 03:15:08.425658 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-28 03:15:08.425667 | orchestrator | Saturday 28 March 2026 03:15:07 +0000 (0:00:00.314) 0:00:21.711 ******** 2026-03-28 03:15:08.425698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:15:08.425720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:15:08.425732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:15:08.425743 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:08.425755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:15:08.425771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:15:08.425783 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:15:08.425814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:08.425832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-28 03:15:27.152337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 03:15:27.152441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 03:15:27.152455 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:27.152464 | orchestrator | 2026-03-28 03:15:27.152476 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 03:15:27.152490 | orchestrator | Saturday 28 March 2026 03:15:08 +0000 (0:00:00.591) 0:00:22.302 ******** 2026-03-28 03:15:27.152502 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:27.152513 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:27.152524 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:27.152536 | orchestrator | 2026-03-28 03:15:27.152548 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-28 03:15:27.152561 | orchestrator | Saturday 28 March 2026 03:15:08 +0000 (0:00:00.275) 0:00:22.577 ******** 2026-03-28 03:15:27.152573 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 03:15:27.152587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 03:15:27.152624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-28 03:15:27.152639 | orchestrator | 2026-03-28 03:15:27.152666 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-28 03:15:27.152678 | orchestrator | Saturday 28 March 2026 03:15:10 +0000 (0:00:01.658) 0:00:24.236 ******** 2026-03-28 03:15:27.152686 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:15:27.152693 | orchestrator | 2026-03-28 03:15:27.152701 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-28 03:15:27.152708 | orchestrator | Saturday 28 March 2026 03:15:11 +0000 (0:00:00.906) 0:00:25.142 ******** 2026-03-28 03:15:27.152715 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:15:27.152722 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:15:27.152729 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:15:27.152736 | orchestrator | 2026-03-28 03:15:27.152744 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-28 03:15:27.152751 | orchestrator | Saturday 28 March 2026 03:15:11 +0000 (0:00:00.579) 0:00:25.721 ******** 2026-03-28 03:15:27.152758 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:15:27.152765 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 03:15:27.152772 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:15:27.152779 | orchestrator | 2026-03-28 03:15:27.152787 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-28 03:15:27.152795 | orchestrator | Saturday 28 March 2026 03:15:12 +0000 (0:00:01.059) 
0:00:26.781 ******** 2026-03-28 03:15:27.152802 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:15:27.152810 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:15:27.152818 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:15:27.152825 | orchestrator | 2026-03-28 03:15:27.152832 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-28 03:15:27.152839 | orchestrator | Saturday 28 March 2026 03:15:13 +0000 (0:00:00.559) 0:00:27.341 ******** 2026-03-28 03:15:27.152847 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 03:15:27.152856 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 03:15:27.152864 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-28 03:15:27.152873 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 03:15:27.152881 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 03:15:27.152889 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-28 03:15:27.152898 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 03:15:27.152907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 03:15:27.152932 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-28 03:15:27.152941 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 03:15:27.152948 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 
03:15:27.152956 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-28 03:15:27.152963 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 03:15:27.152970 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 03:15:27.152977 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-28 03:15:27.152985 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 03:15:27.152999 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 03:15:27.153007 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 03:15:27.153014 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 03:15:27.153021 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 03:15:27.153029 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 03:15:27.153036 | orchestrator | 2026-03-28 03:15:27.153043 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-28 03:15:27.153050 | orchestrator | Saturday 28 March 2026 03:15:22 +0000 (0:00:08.771) 0:00:36.112 ******** 2026-03-28 03:15:27.153057 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 03:15:27.153065 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 03:15:27.153072 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 03:15:27.153079 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 03:15:27.153086 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 03:15:27.153093 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 03:15:27.153100 | orchestrator | 2026-03-28 03:15:27.153108 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-28 03:15:27.153154 | orchestrator | Saturday 28 March 2026 03:15:24 +0000 (0:00:02.668) 0:00:38.781 ******** 2026-03-28 03:15:27.153167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:15:27.153184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:17:06.957654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-28 03:17:06.957791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 03:17:06.957902 | orchestrator | 2026-03-28 03:17:06.957914 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-28 03:17:06.957925 | orchestrator | Saturday 28 March 2026 03:15:27 +0000 (0:00:02.249) 0:00:41.031 ******** 2026-03-28 03:17:06.957935 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:06.957946 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:17:06.957956 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:17:06.957966 | orchestrator | 2026-03-28 03:17:06.957976 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-28 03:17:06.957986 | orchestrator | Saturday 28 March 2026 03:15:27 +0000 (0:00:00.548) 0:00:41.580 ******** 2026-03-28 03:17:06.957996 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958005 | orchestrator | 2026-03-28 03:17:06.958064 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-28 03:17:06.958076 | orchestrator | Saturday 28 March 2026 03:15:29 +0000 (0:00:02.221) 0:00:43.801 ******** 2026-03-28 03:17:06.958086 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958096 | orchestrator | 2026-03-28 03:17:06.958150 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-28 03:17:06.958168 | orchestrator | Saturday 28 March 2026 03:15:32 +0000 (0:00:02.273) 0:00:46.074 ******** 2026-03-28 03:17:06.958185 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:17:06.958203 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:17:06.958215 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:17:06.958227 | orchestrator | 2026-03-28 03:17:06.958239 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-28 03:17:06.958250 | orchestrator | Saturday 28 March 2026 03:15:33 +0000 (0:00:00.850) 0:00:46.925 ******** 2026-03-28 03:17:06.958262 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:17:06.958273 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:17:06.958284 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 03:17:06.958295 | orchestrator | 2026-03-28 03:17:06.958307 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-28 03:17:06.958327 | orchestrator | Saturday 28 March 2026 03:15:33 +0000 (0:00:00.341) 0:00:47.267 ******** 2026-03-28 03:17:06.958339 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:06.958351 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:17:06.958363 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:17:06.958374 | orchestrator | 2026-03-28 03:17:06.958386 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-28 03:17:06.958397 | orchestrator | Saturday 28 March 2026 03:15:33 +0000 (0:00:00.619) 0:00:47.886 ******** 2026-03-28 03:17:06.958408 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958420 | orchestrator | 2026-03-28 03:17:06.958431 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-28 03:17:06.958443 | orchestrator | Saturday 28 March 2026 03:15:49 +0000 (0:00:15.034) 0:01:02.920 ******** 2026-03-28 03:17:06.958454 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958465 | orchestrator | 2026-03-28 03:17:06.958477 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 03:17:06.958488 | orchestrator | Saturday 28 March 2026 03:15:59 +0000 (0:00:10.938) 0:01:13.858 ******** 2026-03-28 03:17:06.958509 | orchestrator | 2026-03-28 03:17:06.958520 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 03:17:06.958532 | orchestrator | Saturday 28 March 2026 03:16:00 +0000 (0:00:00.068) 0:01:13.927 ******** 2026-03-28 03:17:06.958544 | orchestrator | 2026-03-28 03:17:06.958555 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-28 
03:17:06.958567 | orchestrator | Saturday 28 March 2026 03:16:00 +0000 (0:00:00.075) 0:01:14.002 ******** 2026-03-28 03:17:06.958577 | orchestrator | 2026-03-28 03:17:06.958587 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-28 03:17:06.958597 | orchestrator | Saturday 28 March 2026 03:16:00 +0000 (0:00:00.073) 0:01:14.076 ******** 2026-03-28 03:17:06.958614 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958635 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:17:06.958657 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:17:06.958673 | orchestrator | 2026-03-28 03:17:06.958688 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-28 03:17:06.958703 | orchestrator | Saturday 28 March 2026 03:16:48 +0000 (0:00:48.754) 0:02:02.830 ******** 2026-03-28 03:17:06.958717 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958733 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:17:06.958748 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:17:06.958763 | orchestrator | 2026-03-28 03:17:06.958780 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-28 03:17:06.958796 | orchestrator | Saturday 28 March 2026 03:16:54 +0000 (0:00:05.480) 0:02:08.310 ******** 2026-03-28 03:17:06.958812 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:06.958828 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:17:06.958844 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:17:06.958861 | orchestrator | 2026-03-28 03:17:06.958877 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 03:17:06.958894 | orchestrator | Saturday 28 March 2026 03:17:06 +0000 (0:00:11.895) 0:02:20.206 ******** 2026-03-28 03:17:06.958916 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:17:58.296883 | orchestrator | 2026-03-28 03:17:58.296967 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-28 03:17:58.296974 | orchestrator | Saturday 28 March 2026 03:17:06 +0000 (0:00:00.631) 0:02:20.838 ******** 2026-03-28 03:17:58.296979 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:17:58.296985 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:17:58.296990 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:17:58.296994 | orchestrator | 2026-03-28 03:17:58.296998 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-28 03:17:58.297002 | orchestrator | Saturday 28 March 2026 03:17:08 +0000 (0:00:01.172) 0:02:22.011 ******** 2026-03-28 03:17:58.297006 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:17:58.297011 | orchestrator | 2026-03-28 03:17:58.297015 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-28 03:17:58.297019 | orchestrator | Saturday 28 March 2026 03:17:09 +0000 (0:00:01.768) 0:02:23.780 ******** 2026-03-28 03:17:58.297023 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-28 03:17:58.297027 | orchestrator | 2026-03-28 03:17:58.297031 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-28 03:17:58.297035 | orchestrator | Saturday 28 March 2026 03:17:22 +0000 (0:00:12.157) 0:02:35.937 ******** 2026-03-28 03:17:58.297038 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-28 03:17:58.297042 | orchestrator | 2026-03-28 03:17:58.297046 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-28 03:17:58.297050 | orchestrator | Saturday 28 March 2026 03:17:46 +0000 (0:00:24.541) 0:03:00.479 ******** 2026-03-28 03:17:58.297053 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-28 03:17:58.297071 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-28 03:17:58.297075 | orchestrator | 2026-03-28 03:17:58.297079 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-28 03:17:58.297083 | orchestrator | Saturday 28 March 2026 03:17:53 +0000 (0:00:06.562) 0:03:07.042 ******** 2026-03-28 03:17:58.297086 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:58.297090 | orchestrator | 2026-03-28 03:17:58.297111 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-28 03:17:58.297117 | orchestrator | Saturday 28 March 2026 03:17:53 +0000 (0:00:00.135) 0:03:07.177 ******** 2026-03-28 03:17:58.297123 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:58.297129 | orchestrator | 2026-03-28 03:17:58.297135 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-28 03:17:58.297142 | orchestrator | Saturday 28 March 2026 03:17:53 +0000 (0:00:00.133) 0:03:07.310 ******** 2026-03-28 03:17:58.297148 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:58.297154 | orchestrator | 2026-03-28 03:17:58.297174 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-28 03:17:58.297179 | orchestrator | Saturday 28 March 2026 03:17:53 +0000 (0:00:00.140) 0:03:07.451 ******** 2026-03-28 03:17:58.297183 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:58.297186 | orchestrator | 2026-03-28 03:17:58.297190 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-28 03:17:58.297194 | orchestrator | Saturday 28 March 2026 03:17:54 +0000 (0:00:00.571) 0:03:08.022 ******** 2026-03-28 03:17:58.297198 | orchestrator | ok: [testbed-node-0] 2026-03-28 
03:17:58.297202 | orchestrator | 2026-03-28 03:17:58.297205 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 03:17:58.297209 | orchestrator | Saturday 28 March 2026 03:17:57 +0000 (0:00:03.209) 0:03:11.231 ******** 2026-03-28 03:17:58.297213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:17:58.297217 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:17:58.297220 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:17:58.297224 | orchestrator | 2026-03-28 03:17:58.297228 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:17:58.297233 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 03:17:58.297238 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 03:17:58.297242 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 03:17:58.297245 | orchestrator | 2026-03-28 03:17:58.297249 | orchestrator | 2026-03-28 03:17:58.297253 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:17:58.297257 | orchestrator | Saturday 28 March 2026 03:17:57 +0000 (0:00:00.523) 0:03:11.755 ******** 2026-03-28 03:17:58.297261 | orchestrator | =============================================================================== 2026-03-28 03:17:58.297265 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.75s 2026-03-28 03:17:58.297269 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.54s 2026-03-28 03:17:58.297272 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.03s 2026-03-28 03:17:58.297276 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.16s 2026-03-28 03:17:58.297280 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.90s 2026-03-28 03:17:58.297284 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.94s 2026-03-28 03:17:58.297287 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.77s 2026-03-28 03:17:58.297291 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.56s 2026-03-28 03:17:58.297300 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.48s 2026-03-28 03:17:58.297314 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.84s 2026-03-28 03:17:58.297318 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.67s 2026-03-28 03:17:58.297321 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.23s 2026-03-28 03:17:58.297325 | orchestrator | keystone : Creating default user role ----------------------------------- 3.21s 2026-03-28 03:17:58.297329 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s 2026-03-28 03:17:58.297333 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.27s 2026-03-28 03:17:58.297336 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s 2026-03-28 03:17:58.297340 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s 2026-03-28 03:17:58.297343 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.77s 2026-03-28 03:17:58.297347 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.66s 2026-03-28 03:17:58.297351 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 
1.54s 2026-03-28 03:18:00.719215 | orchestrator | 2026-03-28 03:18:00 | INFO  | Task b759e9a7-137c-4089-96a2-bd06012e4f0c (placement) was prepared for execution. 2026-03-28 03:18:00.719313 | orchestrator | 2026-03-28 03:18:00 | INFO  | It takes a moment until task b759e9a7-137c-4089-96a2-bd06012e4f0c (placement) has been started and output is visible here. 2026-03-28 03:18:36.087932 | orchestrator | 2026-03-28 03:18:36.088072 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:18:36.088134 | orchestrator | 2026-03-28 03:18:36.088153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:18:36.088171 | orchestrator | Saturday 28 March 2026 03:18:04 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-03-28 03:18:36.088188 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:18:36.088207 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:18:36.088226 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:18:36.088244 | orchestrator | 2026-03-28 03:18:36.088262 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:18:36.088280 | orchestrator | Saturday 28 March 2026 03:18:05 +0000 (0:00:00.309) 0:00:00.581 ******** 2026-03-28 03:18:36.088298 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-28 03:18:36.088316 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-28 03:18:36.088334 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-28 03:18:36.088352 | orchestrator | 2026-03-28 03:18:36.088390 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-28 03:18:36.088408 | orchestrator | 2026-03-28 03:18:36.088426 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 03:18:36.088446 | orchestrator | Saturday 28 March 2026 03:18:05 
+0000 (0:00:00.453) 0:00:01.034 ******** 2026-03-28 03:18:36.088467 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:18:36.088488 | orchestrator | 2026-03-28 03:18:36.088507 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-28 03:18:36.088527 | orchestrator | Saturday 28 March 2026 03:18:06 +0000 (0:00:00.534) 0:00:01.568 ******** 2026-03-28 03:18:36.088545 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-28 03:18:36.088562 | orchestrator | 2026-03-28 03:18:36.088580 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-28 03:18:36.088598 | orchestrator | Saturday 28 March 2026 03:18:10 +0000 (0:00:03.895) 0:00:05.464 ******** 2026-03-28 03:18:36.088616 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-28 03:18:36.088664 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-28 03:18:36.088682 | orchestrator | 2026-03-28 03:18:36.088699 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-28 03:18:36.088716 | orchestrator | Saturday 28 March 2026 03:18:17 +0000 (0:00:06.896) 0:00:12.360 ******** 2026-03-28 03:18:36.088732 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-28 03:18:36.088749 | orchestrator | 2026-03-28 03:18:36.088766 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-28 03:18:36.088783 | orchestrator | Saturday 28 March 2026 03:18:20 +0000 (0:00:03.681) 0:00:16.041 ******** 2026-03-28 03:18:36.088801 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:18:36.088818 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-03-28 03:18:36.088835 | orchestrator | 2026-03-28 03:18:36.088852 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-28 03:18:36.088868 | orchestrator | Saturday 28 March 2026 03:18:24 +0000 (0:00:04.131) 0:00:20.173 ******** 2026-03-28 03:18:36.088884 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:18:36.088902 | orchestrator | 2026-03-28 03:18:36.088918 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-28 03:18:36.088935 | orchestrator | Saturday 28 March 2026 03:18:28 +0000 (0:00:03.261) 0:00:23.435 ******** 2026-03-28 03:18:36.088951 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-28 03:18:36.088968 | orchestrator | 2026-03-28 03:18:36.088984 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 03:18:36.089000 | orchestrator | Saturday 28 March 2026 03:18:31 +0000 (0:00:03.753) 0:00:27.188 ******** 2026-03-28 03:18:36.089011 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:36.089021 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:18:36.089030 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:18:36.089040 | orchestrator | 2026-03-28 03:18:36.089050 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-28 03:18:36.089059 | orchestrator | Saturday 28 March 2026 03:18:32 +0000 (0:00:00.310) 0:00:27.498 ******** 2026-03-28 03:18:36.089074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:36.089157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:36.089183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:36.089194 | orchestrator | 2026-03-28 03:18:36.089204 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-28 03:18:36.089214 | orchestrator | Saturday 28 March 2026 03:18:33 +0000 (0:00:00.869) 0:00:28.368 ******** 2026-03-28 03:18:36.089224 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:36.089234 | orchestrator | 2026-03-28 03:18:36.089243 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-28 03:18:36.089253 | orchestrator | Saturday 28 March 2026 03:18:33 +0000 (0:00:00.350) 0:00:28.719 ******** 2026-03-28 03:18:36.089263 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:36.089272 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:18:36.089282 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:18:36.089291 | orchestrator | 2026-03-28 03:18:36.089301 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 03:18:36.089311 | orchestrator | Saturday 28 March 2026 03:18:33 +0000 (0:00:00.320) 0:00:29.040 ******** 2026-03-28 03:18:36.089321 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:18:36.089331 | orchestrator | 2026-03-28 03:18:36.089340 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-28 03:18:36.089350 | orchestrator | Saturday 28 March 2026 03:18:34 +0000 (0:00:00.586) 0:00:29.627 ******** 2026-03-28 
03:18:36.089360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:36.089380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:39.005139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:39.005218 | orchestrator | 2026-03-28 03:18:39.005227 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-28 03:18:39.005235 | orchestrator | Saturday 28 March 2026 03:18:36 +0000 (0:00:01.725) 0:00:31.352 ******** 2026-03-28 03:18:39.005242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005249 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:39.005256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005262 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:18:39.005268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005291 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:18:39.005297 | orchestrator | 2026-03-28 03:18:39.005302 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-28 03:18:39.005319 | orchestrator | Saturday 28 March 2026 03:18:36 +0000 (0:00:00.495) 0:00:31.847 ******** 2026-03-28 03:18:39.005330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005336 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:39.005342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005348 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:18:39.005354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:39.005360 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:18:39.005366 | orchestrator | 2026-03-28 03:18:39.005371 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-28 03:18:39.005377 | orchestrator | Saturday 28 March 2026 03:18:37 +0000 (0:00:00.763) 0:00:32.611 ******** 2026-03-28 03:18:39.005383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:39.005402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:46.088175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:46.088316 | orchestrator | 2026-03-28 03:18:46.088337 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-28 03:18:46.088350 | orchestrator | Saturday 28 March 2026 03:18:38 +0000 (0:00:01.662) 0:00:34.273 ******** 2026-03-28 03:18:46.088362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:46.088375 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:46.088428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:18:46.088442 | orchestrator | 2026-03-28 03:18:46.088453 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-03-28 03:18:46.088464 | orchestrator | Saturday 28 March 2026 03:18:41 +0000 (0:00:02.435) 0:00:36.709 ******** 2026-03-28 03:18:46.088492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 03:18:46.088506 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 03:18:46.088517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-28 03:18:46.088528 | orchestrator | 2026-03-28 03:18:46.088538 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-28 03:18:46.088549 | orchestrator | Saturday 28 March 2026 03:18:42 +0000 (0:00:01.434) 0:00:38.143 ******** 2026-03-28 03:18:46.088560 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:18:46.088572 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:18:46.088583 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:18:46.088593 | orchestrator | 2026-03-28 03:18:46.088604 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-28 03:18:46.088615 | orchestrator | Saturday 28 March 2026 03:18:44 +0000 (0:00:01.351) 0:00:39.495 ******** 2026-03-28 03:18:46.088629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:46.088643 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:18:46.088656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:46.088677 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:18:46.088691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-28 03:18:46.088704 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:18:46.088715 | orchestrator | 2026-03-28 03:18:46.088728 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-28 03:18:46.088747 | orchestrator | Saturday 28 March 2026 03:18:45 +0000 (0:00:00.799) 0:00:40.295 ******** 2026-03-28 03:18:46.088769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:19:15.751309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:19:15.751419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-28 03:19:15.751428 | orchestrator | 2026-03-28 03:19:15.751435 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-28 03:19:15.751442 | orchestrator | Saturday 28 March 2026 03:18:46 +0000 (0:00:01.065) 0:00:41.361 ******** 2026-03-28 03:19:15.751448 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:19:15.751454 | orchestrator | 2026-03-28 03:19:15.751460 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-28 03:19:15.751465 | orchestrator | Saturday 28 March 2026 03:18:48 +0000 (0:00:02.088) 0:00:43.449 ******** 2026-03-28 03:19:15.751470 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:19:15.751476 | orchestrator | 2026-03-28 03:19:15.751481 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-28 03:19:15.751486 | orchestrator | Saturday 28 March 2026 03:18:50 +0000 (0:00:02.163) 0:00:45.612 ******** 2026-03-28 03:19:15.751491 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:19:15.751496 | orchestrator | 2026-03-28 03:19:15.751502 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 03:19:15.751507 | orchestrator | Saturday 28 March 2026 03:19:04 +0000 (0:00:14.429) 0:01:00.042 ******** 2026-03-28 03:19:15.751512 | orchestrator | 2026-03-28 03:19:15.751517 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 03:19:15.751522 | orchestrator | Saturday 28 March 2026 03:19:04 +0000 (0:00:00.071) 0:01:00.114 ******** 2026-03-28 03:19:15.751527 | orchestrator | 2026-03-28 03:19:15.751532 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 03:19:15.751537 | orchestrator | Saturday 28 March 2026 03:19:04 +0000 (0:00:00.089) 0:01:00.204 ******** 2026-03-28 03:19:15.751542 | orchestrator | 2026-03-28 03:19:15.751547 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-28 03:19:15.751552 | orchestrator | Saturday 28 March 2026 03:19:05 +0000 (0:00:00.095) 0:01:00.299 ******** 2026-03-28 03:19:15.751557 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:19:15.751574 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:19:15.751579 | orchestrator | changed: [testbed-node-1] 2026-03-28 
03:19:15.751584 | orchestrator | 2026-03-28 03:19:15.751589 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:19:15.751595 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:19:15.751601 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 03:19:15.751606 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 03:19:15.751611 | orchestrator | 2026-03-28 03:19:15.751616 | orchestrator | 2026-03-28 03:19:15.751622 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:19:15.751627 | orchestrator | Saturday 28 March 2026 03:19:15 +0000 (0:00:10.336) 0:01:10.635 ******** 2026-03-28 03:19:15.751637 | orchestrator | =============================================================================== 2026-03-28 03:19:15.751642 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.43s 2026-03-28 03:19:15.751658 | orchestrator | placement : Restart placement-api container ---------------------------- 10.34s 2026-03-28 03:19:15.751664 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.90s 2026-03-28 03:19:15.751669 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.13s 2026-03-28 03:19:15.751674 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.90s 2026-03-28 03:19:15.751679 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.75s 2026-03-28 03:19:15.751684 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.68s 2026-03-28 03:19:15.751689 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.26s 2026-03-28 03:19:15.751694 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.44s 2026-03-28 03:19:15.751699 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.16s 2026-03-28 03:19:15.751704 | orchestrator | placement : Creating placement databases -------------------------------- 2.09s 2026-03-28 03:19:15.751710 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.73s 2026-03-28 03:19:15.751715 | orchestrator | placement : Copying over config.json files for services ----------------- 1.66s 2026-03-28 03:19:15.751720 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s 2026-03-28 03:19:15.751725 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s 2026-03-28 03:19:15.751730 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s 2026-03-28 03:19:15.751735 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.87s 2026-03-28 03:19:15.751740 | orchestrator | placement : Copying over existing policy file --------------------------- 0.80s 2026-03-28 03:19:15.751745 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.76s 2026-03-28 03:19:15.751750 | orchestrator | placement : include_tasks ----------------------------------------------- 0.59s 2026-03-28 03:19:18.205632 | orchestrator | 2026-03-28 03:19:18 | INFO  | Task 00e378da-e65f-4b4e-8ee4-77f7c57205d3 (neutron) was prepared for execution. 2026-03-28 03:19:18.205738 | orchestrator | 2026-03-28 03:19:18 | INFO  | It takes a moment until task 00e378da-e65f-4b4e-8ee4-77f7c57205d3 (neutron) has been started and output is visible here. 
2026-03-28 03:20:07.816598 | orchestrator | 2026-03-28 03:20:07.816725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:20:07.816743 | orchestrator | 2026-03-28 03:20:07.816755 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:20:07.816767 | orchestrator | Saturday 28 March 2026 03:19:22 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-03-28 03:20:07.816779 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:20:07.816791 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:20:07.816803 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:20:07.816814 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:20:07.816825 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:20:07.816836 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:20:07.816847 | orchestrator | 2026-03-28 03:20:07.816859 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:20:07.816870 | orchestrator | Saturday 28 March 2026 03:19:23 +0000 (0:00:00.713) 0:00:00.979 ******** 2026-03-28 03:20:07.816881 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-28 03:20:07.816893 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-28 03:20:07.816904 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-28 03:20:07.816918 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-28 03:20:07.816936 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-28 03:20:07.816987 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-28 03:20:07.817007 | orchestrator | 2026-03-28 03:20:07.817024 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-28 03:20:07.817044 | orchestrator | 2026-03-28 03:20:07.817062 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-28 03:20:07.817081 | orchestrator | Saturday 28 March 2026 03:19:23 +0000 (0:00:00.634) 0:00:01.614 ******** 2026-03-28 03:20:07.817148 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:20:07.817164 | orchestrator | 2026-03-28 03:20:07.817177 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-28 03:20:07.817191 | orchestrator | Saturday 28 March 2026 03:19:25 +0000 (0:00:01.305) 0:00:02.919 ******** 2026-03-28 03:20:07.817204 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:20:07.817217 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:20:07.817229 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:20:07.817241 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:20:07.817254 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:20:07.817267 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:20:07.817281 | orchestrator | 2026-03-28 03:20:07.817292 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-28 03:20:07.817303 | orchestrator | Saturday 28 March 2026 03:19:26 +0000 (0:00:01.371) 0:00:04.291 ******** 2026-03-28 03:20:07.817314 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:20:07.817325 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:20:07.817336 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:20:07.817347 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:20:07.817357 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:20:07.817368 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:20:07.817379 | orchestrator | 2026-03-28 03:20:07.817390 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-28 03:20:07.817401 | orchestrator | Saturday 28 March 2026 03:19:27 +0000 (0:00:01.101) 0:00:05.392 ******** 
2026-03-28 03:20:07.817412 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 03:20:07.817424 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817435 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817446 | orchestrator | } 2026-03-28 03:20:07.817457 | orchestrator | ok: [testbed-node-1] => { 2026-03-28 03:20:07.817468 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817479 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817490 | orchestrator | } 2026-03-28 03:20:07.817501 | orchestrator | ok: [testbed-node-2] => { 2026-03-28 03:20:07.817512 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817522 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817533 | orchestrator | } 2026-03-28 03:20:07.817544 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 03:20:07.817555 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817566 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817576 | orchestrator | } 2026-03-28 03:20:07.817587 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 03:20:07.817598 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817610 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817621 | orchestrator | } 2026-03-28 03:20:07.817632 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 03:20:07.817642 | orchestrator |  "changed": false, 2026-03-28 03:20:07.817654 | orchestrator |  "msg": "All assertions passed" 2026-03-28 03:20:07.817664 | orchestrator | } 2026-03-28 03:20:07.817675 | orchestrator | 2026-03-28 03:20:07.817686 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-28 03:20:07.817697 | orchestrator | Saturday 28 March 2026 03:19:28 +0000 (0:00:00.864) 0:00:06.256 ******** 2026-03-28 03:20:07.817708 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:07.817719 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:07.817730 | orchestrator 
| skipping: [testbed-node-2] 2026-03-28 03:20:07.817753 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:07.817764 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:07.817774 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:07.817785 | orchestrator | 2026-03-28 03:20:07.817796 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-28 03:20:07.817807 | orchestrator | Saturday 28 March 2026 03:19:29 +0000 (0:00:00.705) 0:00:06.962 ******** 2026-03-28 03:20:07.817818 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-28 03:20:07.817829 | orchestrator | 2026-03-28 03:20:07.817840 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-28 03:20:07.817851 | orchestrator | Saturday 28 March 2026 03:19:33 +0000 (0:00:03.947) 0:00:10.910 ******** 2026-03-28 03:20:07.817862 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-28 03:20:07.817874 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-28 03:20:07.817885 | orchestrator | 2026-03-28 03:20:07.817916 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-28 03:20:07.817928 | orchestrator | Saturday 28 March 2026 03:19:39 +0000 (0:00:06.605) 0:00:17.516 ******** 2026-03-28 03:20:07.817939 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:20:07.817950 | orchestrator | 2026-03-28 03:20:07.817961 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-28 03:20:07.817971 | orchestrator | Saturday 28 March 2026 03:19:42 +0000 (0:00:03.213) 0:00:20.729 ******** 2026-03-28 03:20:07.817982 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:20:07.817993 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-28 03:20:07.818003 | orchestrator | 2026-03-28 03:20:07.818079 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-28 03:20:07.818190 | orchestrator | Saturday 28 March 2026 03:19:47 +0000 (0:00:04.114) 0:00:24.844 ******** 2026-03-28 03:20:07.818210 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:20:07.818228 | orchestrator | 2026-03-28 03:20:07.818245 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-28 03:20:07.818262 | orchestrator | Saturday 28 March 2026 03:19:50 +0000 (0:00:03.387) 0:00:28.231 ******** 2026-03-28 03:20:07.818279 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-28 03:20:07.818297 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-28 03:20:07.818315 | orchestrator | 2026-03-28 03:20:07.818334 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 03:20:07.818353 | orchestrator | Saturday 28 March 2026 03:19:58 +0000 (0:00:08.140) 0:00:36.372 ******** 2026-03-28 03:20:07.818371 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:07.818386 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:07.818397 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:07.818408 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:07.818419 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:07.818439 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:07.818450 | orchestrator | 2026-03-28 03:20:07.818461 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-28 03:20:07.818472 | orchestrator | Saturday 28 March 2026 03:19:59 +0000 (0:00:00.822) 0:00:37.194 ******** 2026-03-28 03:20:07.818483 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
03:20:07.818494 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:07.818505 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:07.818515 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:07.818526 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:07.818537 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:07.818547 | orchestrator | 2026-03-28 03:20:07.818558 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-28 03:20:07.818569 | orchestrator | Saturday 28 March 2026 03:20:01 +0000 (0:00:02.275) 0:00:39.470 ******** 2026-03-28 03:20:07.818592 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:20:07.818603 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:20:07.818613 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:20:07.818624 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:20:07.818635 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:20:07.818646 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:20:07.818656 | orchestrator | 2026-03-28 03:20:07.818667 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-28 03:20:07.818678 | orchestrator | Saturday 28 March 2026 03:20:03 +0000 (0:00:01.324) 0:00:40.795 ******** 2026-03-28 03:20:07.818689 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:07.818700 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:07.818711 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:07.818721 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:07.818732 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:07.818743 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:07.818753 | orchestrator | 2026-03-28 03:20:07.818764 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-28 03:20:07.818775 | orchestrator | Saturday 28 March 2026 03:20:05 +0000 (0:00:02.332) 
0:00:43.127 ******** 2026-03-28 03:20:07.818790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:07.818821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:13.185520 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:13.185666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:13.185685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:13.185697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:13.185710 | orchestrator | 2026-03-28 03:20:13.185724 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-28 03:20:13.185737 | orchestrator | Saturday 28 March 2026 03:20:07 +0000 (0:00:02.463) 0:00:45.591 ******** 2026-03-28 03:20:13.185750 | orchestrator | [WARNING]: Skipped 2026-03-28 03:20:13.185764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-28 03:20:13.185776 | orchestrator | due to this access issue: 2026-03-28 03:20:13.185790 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-28 03:20:13.185801 | orchestrator | a directory 2026-03-28 03:20:13.185813 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:20:13.185825 | orchestrator | 2026-03-28 03:20:13.185836 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 03:20:13.185848 | orchestrator | Saturday 28 March 2026 03:20:08 +0000 (0:00:00.871) 0:00:46.463 ******** 2026-03-28 03:20:13.185861 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:20:13.185874 | orchestrator | 2026-03-28 03:20:13.185886 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-28 03:20:13.185914 | orchestrator | Saturday 28 March 2026 03:20:09 +0000 (0:00:01.310) 0:00:47.773 ******** 2026-03-28 03:20:13.185931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:13.185953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:13.185966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:13.185978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:13.185997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:18.035663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:18.035776 | orchestrator | 2026-03-28 03:20:18.035793 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-28 03:20:18.035806 | orchestrator | Saturday 28 March 2026 03:20:13 +0000 (0:00:03.181) 0:00:50.955 ******** 2026-03-28 03:20:18.035820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:18.035834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:18.035847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:18.035858 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:18.035870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:18.035881 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:18.035935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:18.035955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:18.035968 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:18.035979 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:18.035991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:18.036002 | orchestrator | skipping: [testbed-node-5] 
2026-03-28 03:20:18.036013 | orchestrator | 2026-03-28 03:20:18.036025 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-28 03:20:18.036036 | orchestrator | Saturday 28 March 2026 03:20:15 +0000 (0:00:02.000) 0:00:52.955 ******** 2026-03-28 03:20:18.036048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:18.036060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:18.036078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:23.641576 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:23.641723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:23.641744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:23.641758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:23.641770 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:23.641781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:23.641793 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:23.641805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:23.641839 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:23.641851 | orchestrator | 2026-03-28 
03:20:23.641864 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-28 03:20:23.641877 | orchestrator | Saturday 28 March 2026 03:20:18 +0000 (0:00:02.854) 0:00:55.810 ******** 2026-03-28 03:20:23.641887 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:23.641898 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:23.641909 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:23.641920 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:23.641931 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:23.641942 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:23.641952 | orchestrator | 2026-03-28 03:20:23.641963 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-28 03:20:23.641974 | orchestrator | Saturday 28 March 2026 03:20:20 +0000 (0:00:02.396) 0:00:58.207 ******** 2026-03-28 03:20:23.641985 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:23.641996 | orchestrator | 2026-03-28 03:20:23.642007 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-28 03:20:23.642209 | orchestrator | Saturday 28 March 2026 03:20:20 +0000 (0:00:00.155) 0:00:58.362 ******** 2026-03-28 03:20:23.642235 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:23.642254 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:23.642273 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:23.642292 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:23.642310 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:23.642329 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:23.642347 | orchestrator | 2026-03-28 03:20:23.642366 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-28 03:20:23.642384 | orchestrator | Saturday 28 March 2026 03:20:21 +0000 (0:00:00.668) 
0:00:59.030 ******** 2026-03-28 03:20:23.642416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:23.642437 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:23.642455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 
03:20:23.642492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:23.642513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:23.642535 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:23.642555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:23.642573 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:23.642615 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:32.629689 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:32.629787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:32.629799 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:32.629803 | orchestrator | 2026-03-28 03:20:32.629808 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-28 03:20:32.629814 | orchestrator | Saturday 28 March 2026 03:20:23 +0000 (0:00:02.377) 0:01:01.408 ******** 2026-03-28 03:20:32.629819 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:32.629841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:32.629845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:32.629871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:32.629877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:32.629884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:32.629889 | orchestrator | 2026-03-28 03:20:32.629893 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-28 03:20:32.629897 | orchestrator | Saturday 28 March 2026 03:20:27 +0000 (0:00:03.478) 0:01:04.886 ******** 2026-03-28 03:20:32.629901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:32.629905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:32.629916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:37.534397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:37.534527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 
03:20:37.534544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:20:37.534556 | orchestrator | 2026-03-28 03:20:37.534569 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-28 03:20:37.534580 | orchestrator | Saturday 28 March 2026 03:20:32 +0000 (0:00:05.515) 0:01:10.401 ******** 2026-03-28 03:20:37.534591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-28 03:20:37.534617 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:37.534662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:37.534686 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:37.534697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:20:37.534709 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:37.534720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:37.534732 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:37.534744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:37.534755 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:37.534773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:37.534781 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:37.534787 | orchestrator | 2026-03-28 03:20:37.534795 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-28 03:20:37.534808 | orchestrator | Saturday 28 March 2026 03:20:34 +0000 (0:00:02.166) 0:01:12.567 ******** 2026-03-28 03:20:37.534815 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:37.534821 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:37.534828 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:37.534835 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:20:37.534841 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:20:37.534848 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:20:37.534855 | orchestrator | 2026-03-28 03:20:37.534862 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-28 03:20:37.534874 | orchestrator | Saturday 28 March 2026 03:20:37 +0000 (0:00:02.736) 0:01:15.304 ******** 2026-03-28 03:20:56.794498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:56.794595 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.794606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:56.794613 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.794620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:20:56.794626 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.794634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:56.794683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:56.794696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:20:56.794706 | orchestrator | 2026-03-28 03:20:56.794718 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-28 03:20:56.794729 | orchestrator | Saturday 28 March 2026 03:20:41 +0000 (0:00:03.574) 0:01:18.878 ******** 2026-03-28 03:20:56.794739 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.794749 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.794758 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.794768 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.794778 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.794788 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.794799 | orchestrator | 2026-03-28 03:20:56.794806 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-03-28 03:20:56.794812 | orchestrator | Saturday 28 March 2026 03:20:43 +0000 (0:00:02.177) 0:01:21.056 ******** 2026-03-28 03:20:56.794818 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.794825 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.794831 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.794837 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.794843 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.794849 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.794855 | orchestrator | 2026-03-28 03:20:56.794861 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-28 03:20:56.794867 | orchestrator | Saturday 28 March 2026 03:20:45 +0000 (0:00:02.161) 0:01:23.217 ******** 2026-03-28 03:20:56.794873 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.794880 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.794886 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.794892 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.794898 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.794904 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.794910 | orchestrator | 2026-03-28 03:20:56.794916 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-28 03:20:56.794930 | orchestrator | Saturday 28 March 2026 03:20:47 +0000 (0:00:02.209) 0:01:25.427 ******** 2026-03-28 03:20:56.794936 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.794942 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.794948 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.794954 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.794960 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.794966 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 03:20:56.794972 | orchestrator | 2026-03-28 03:20:56.794979 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-28 03:20:56.794990 | orchestrator | Saturday 28 March 2026 03:20:49 +0000 (0:00:02.067) 0:01:27.495 ******** 2026-03-28 03:20:56.795005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.795016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.795027 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.795036 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.795046 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.795054 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.795063 | orchestrator | 2026-03-28 03:20:56.795071 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-28 03:20:56.795080 | orchestrator | Saturday 28 March 2026 03:20:51 +0000 (0:00:02.140) 0:01:29.635 ******** 2026-03-28 03:20:56.795123 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.795132 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.795141 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.795149 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.795165 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:20:56.795175 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:20:56.795184 | orchestrator | 2026-03-28 03:20:56.795193 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-28 03:20:56.795203 | orchestrator | Saturday 28 March 2026 03:20:54 +0000 (0:00:02.380) 0:01:32.016 ******** 2026-03-28 03:20:56.795212 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:20:56.795222 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:20:56.795231 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:20:56.795241 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:20:56.795250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:20:56.795260 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:20:56.795270 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:20:56.795279 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:20:56.795299 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:21:01.193972 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:01.194158 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 03:21:01.194174 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:01.194183 | orchestrator | 2026-03-28 03:21:01.194192 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-28 03:21:01.194201 | orchestrator | Saturday 28 March 2026 03:20:56 +0000 (0:00:02.550) 0:01:34.567 ******** 2026-03-28 03:21:01.194213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:01.194248 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:01.194258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:01.194266 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:01.194275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:01.194283 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:01.194305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:01.194315 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:01.194339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:01.194356 | orchestrator | 
skipping: [testbed-node-4] 2026-03-28 03:21:01.194365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:01.194373 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:01.194381 | orchestrator | 2026-03-28 03:21:01.194389 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-28 03:21:01.194397 | orchestrator | Saturday 28 March 2026 03:20:58 +0000 (0:00:02.102) 0:01:36.669 ******** 2026-03-28 03:21:01.194405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:01.194414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:01.194426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:01.194435 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:01.194450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:28.667169 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.667307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:28.667337 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.667359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:28.667380 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.667401 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:28.667421 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.667440 | orchestrator | 2026-03-28 03:21:28.667453 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-28 03:21:28.667467 | orchestrator | Saturday 28 March 2026 03:21:01 +0000 (0:00:02.300) 0:01:38.970 ******** 2026-03-28 03:21:28.667478 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.667489 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.667500 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.667511 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.667522 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.667534 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.667545 | orchestrator | 2026-03-28 03:21:28.667583 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-28 03:21:28.667605 | orchestrator | Saturday 28 March 2026 03:21:03 +0000 (0:00:02.724) 0:01:41.695 ******** 2026-03-28 03:21:28.667625 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.667645 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
03:21:28.667659 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.667672 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:21:28.667685 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:21:28.667697 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:21:28.667710 | orchestrator | 2026-03-28 03:21:28.667723 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-28 03:21:28.667761 | orchestrator | Saturday 28 March 2026 03:21:07 +0000 (0:00:03.948) 0:01:45.643 ******** 2026-03-28 03:21:28.667772 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.667789 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.667807 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.667824 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.667841 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.667859 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.667878 | orchestrator | 2026-03-28 03:21:28.667897 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-28 03:21:28.667915 | orchestrator | Saturday 28 March 2026 03:21:10 +0000 (0:00:02.269) 0:01:47.913 ******** 2026-03-28 03:21:28.667934 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.667952 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.667969 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.667986 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.667998 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668008 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668017 | orchestrator | 2026-03-28 03:21:28.668027 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-28 03:21:28.668059 | orchestrator | Saturday 28 March 2026 03:21:12 +0000 (0:00:02.325) 0:01:50.239 ******** 2026-03-28 
03:21:28.668076 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.668159 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668174 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668189 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.668204 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668218 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668234 | orchestrator | 2026-03-28 03:21:28.668251 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-28 03:21:28.668268 | orchestrator | Saturday 28 March 2026 03:21:14 +0000 (0:00:02.269) 0:01:52.509 ******** 2026-03-28 03:21:28.668284 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668299 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.668315 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668330 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.668344 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668357 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668373 | orchestrator | 2026-03-28 03:21:28.668389 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-28 03:21:28.668405 | orchestrator | Saturday 28 March 2026 03:21:16 +0000 (0:00:02.268) 0:01:54.778 ******** 2026-03-28 03:21:28.668422 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.668437 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668453 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668468 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668482 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668497 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.668512 | orchestrator | 2026-03-28 03:21:28.668527 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-28 03:21:28.668542 | orchestrator | Saturday 28 March 2026 03:21:19 +0000 (0:00:02.221) 0:01:56.999 ******** 2026-03-28 03:21:28.668558 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.668573 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668589 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668605 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668621 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.668637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668653 | orchestrator | 2026-03-28 03:21:28.668670 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-28 03:21:28.668685 | orchestrator | Saturday 28 March 2026 03:21:21 +0000 (0:00:02.206) 0:01:59.206 ******** 2026-03-28 03:21:28.668701 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668733 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:28.668748 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668762 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.668778 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.668792 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668806 | orchestrator | 2026-03-28 03:21:28.668820 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-28 03:21:28.668836 | orchestrator | Saturday 28 March 2026 03:21:23 +0000 (0:00:02.436) 0:02:01.642 ******** 2026-03-28 03:21:28.668851 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.668868 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:28.668882 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.668896 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 03:21:28.668911 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.668926 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.668943 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:28.668956 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.668972 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.668987 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:28.669002 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 03:21:28.669026 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:28.669040 | orchestrator | 2026-03-28 03:21:28.669053 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-28 03:21:28.669067 | orchestrator | Saturday 28 March 2026 03:21:26 +0000 (0:00:02.176) 0:02:03.819 ******** 2026-03-28 03:21:28.669134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:28.669156 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:21:28.669192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:31.270829 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:21:31.271016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-28 03:21:31.271039 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:21:31.271053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:31.271149 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:21:31.271179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:31.271192 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:21:31.271209 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 03:21:31.271235 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:21:31.271256 | orchestrator | 2026-03-28 03:21:31.271275 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-28 03:21:31.271294 | orchestrator | Saturday 28 March 2026 03:21:28 +0000 (0:00:02.612) 0:02:06.431 ******** 2026-03-28 03:21:31.271339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-03-28 03:21:31.271380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:21:31.271401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:21:31.271429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-28 03:21:31.271451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:21:31.271493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 03:23:55.575835 | orchestrator | 2026-03-28 03:23:55.575947 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 03:23:55.575963 | orchestrator | Saturday 28 March 2026 03:21:31 +0000 (0:00:02.612) 0:02:09.044 ******** 2026-03-28 03:23:55.575974 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:23:55.575986 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:23:55.575996 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:23:55.576006 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:23:55.576016 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:23:55.576026 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:23:55.576035 | orchestrator | 2026-03-28 03:23:55.576045 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-28 03:23:55.576055 | orchestrator | Saturday 28 March 2026 03:21:32 +0000 (0:00:00.839) 0:02:09.883 ******** 2026-03-28 03:23:55.576065 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:23:55.576075 | orchestrator | 2026-03-28 03:23:55.576145 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-28 03:23:55.576157 | orchestrator | Saturday 28 March 2026 03:21:34 +0000 (0:00:02.164) 0:02:12.048 ******** 2026-03-28 03:23:55.576166 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:23:55.576176 | orchestrator | 2026-03-28 03:23:55.576186 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-28 03:23:55.576196 | orchestrator | Saturday 28 
March 2026 03:21:36 +0000 (0:00:02.154) 0:02:14.202 ******** 2026-03-28 03:23:55.576206 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:23:55.576215 | orchestrator | 2026-03-28 03:23:55.576225 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576236 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:42.905) 0:02:57.108 ******** 2026-03-28 03:23:55.576246 | orchestrator | 2026-03-28 03:23:55.576256 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576266 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.076) 0:02:57.185 ******** 2026-03-28 03:23:55.576275 | orchestrator | 2026-03-28 03:23:55.576285 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576295 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.074) 0:02:57.260 ******** 2026-03-28 03:23:55.576305 | orchestrator | 2026-03-28 03:23:55.576315 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576324 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.079) 0:02:57.340 ******** 2026-03-28 03:23:55.576334 | orchestrator | 2026-03-28 03:23:55.576360 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576370 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.073) 0:02:57.413 ******** 2026-03-28 03:23:55.576381 | orchestrator | 2026-03-28 03:23:55.576393 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-28 03:23:55.576404 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.069) 0:02:57.483 ******** 2026-03-28 03:23:55.576415 | orchestrator | 2026-03-28 03:23:55.576426 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-03-28 03:23:55.576438 | orchestrator | Saturday 28 March 2026 03:22:19 +0000 (0:00:00.070) 0:02:57.554 ******** 2026-03-28 03:23:55.576473 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:23:55.576485 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:23:55.576497 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:23:55.576508 | orchestrator | 2026-03-28 03:23:55.576519 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-28 03:23:55.576531 | orchestrator | Saturday 28 March 2026 03:22:49 +0000 (0:00:30.004) 0:03:27.559 ******** 2026-03-28 03:23:55.576542 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:23:55.576552 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:23:55.576561 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:23:55.576571 | orchestrator | 2026-03-28 03:23:55.576581 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:23:55.576592 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 03:23:55.576603 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 03:23:55.576614 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-28 03:23:55.576623 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 03:23:55.576633 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 03:23:55.576643 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-28 03:23:55.576653 | orchestrator | 2026-03-28 03:23:55.576662 | orchestrator | 2026-03-28 03:23:55.576672 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 03:23:55.576682 | orchestrator | Saturday 28 March 2026 03:23:55 +0000 (0:01:05.278) 0:04:32.837 ******** 2026-03-28 03:23:55.576692 | orchestrator | =============================================================================== 2026-03-28 03:23:55.576702 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.28s 2026-03-28 03:23:55.576711 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.91s 2026-03-28 03:23:55.576721 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.00s 2026-03-28 03:23:55.576747 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.14s 2026-03-28 03:23:55.576757 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.61s 2026-03-28 03:23:55.576767 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.52s 2026-03-28 03:23:55.576777 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.11s 2026-03-28 03:23:55.576786 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.95s 2026-03-28 03:23:55.576796 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.95s 2026-03-28 03:23:55.576806 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.57s 2026-03-28 03:23:55.576816 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.48s 2026-03-28 03:23:55.576825 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.39s 2026-03-28 03:23:55.576835 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.21s 2026-03-28 03:23:55.576845 | orchestrator | service-cert-copy : neutron | 
Copying over extra CA certificates -------- 3.18s 2026-03-28 03:23:55.576854 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.85s 2026-03-28 03:23:55.576864 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.74s 2026-03-28 03:23:55.576881 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 2.73s 2026-03-28 03:23:55.576891 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.61s 2026-03-28 03:23:55.576901 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.61s 2026-03-28 03:23:55.576911 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 2.55s 2026-03-28 03:23:58.051760 | orchestrator | 2026-03-28 03:23:58 | INFO  | Task c6ce3a9b-0e5c-4009-82bf-94568442de8a (nova) was prepared for execution. 2026-03-28 03:23:58.051857 | orchestrator | 2026-03-28 03:23:58 | INFO  | It takes a moment until task c6ce3a9b-0e5c-4009-82bf-94568442de8a (nova) has been started and output is visible here. 
2026-03-28 03:25:57.443993 | orchestrator | 2026-03-28 03:25:57.444188 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:25:57.444207 | orchestrator | 2026-03-28 03:25:57.444216 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-28 03:25:57.444225 | orchestrator | Saturday 28 March 2026 03:24:02 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-03-28 03:25:57.444233 | orchestrator | changed: [testbed-manager] 2026-03-28 03:25:57.444244 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444252 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:25:57.444261 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:25:57.444269 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:25:57.444277 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:25:57.444286 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:25:57.444294 | orchestrator | 2026-03-28 03:25:57.444321 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:25:57.444330 | orchestrator | Saturday 28 March 2026 03:24:03 +0000 (0:00:00.913) 0:00:01.211 ******** 2026-03-28 03:25:57.444339 | orchestrator | changed: [testbed-manager] 2026-03-28 03:25:57.444348 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444353 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:25:57.444358 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:25:57.444364 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:25:57.444369 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:25:57.444374 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:25:57.444379 | orchestrator | 2026-03-28 03:25:57.444385 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:25:57.444390 | orchestrator | Saturday 28 March 2026 03:24:04 +0000 (0:00:00.883) 0:00:02.095 
******** 2026-03-28 03:25:57.444395 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-28 03:25:57.444401 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-28 03:25:57.444406 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-28 03:25:57.444411 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-28 03:25:57.444416 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-28 03:25:57.444421 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-28 03:25:57.444426 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-28 03:25:57.444431 | orchestrator | 2026-03-28 03:25:57.444437 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-28 03:25:57.444442 | orchestrator | 2026-03-28 03:25:57.444447 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 03:25:57.444452 | orchestrator | Saturday 28 March 2026 03:24:05 +0000 (0:00:00.768) 0:00:02.863 ******** 2026-03-28 03:25:57.444459 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:25:57.444468 | orchestrator | 2026-03-28 03:25:57.444480 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-28 03:25:57.444491 | orchestrator | Saturday 28 March 2026 03:24:05 +0000 (0:00:00.756) 0:00:03.620 ******** 2026-03-28 03:25:57.444500 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-28 03:25:57.444530 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-28 03:25:57.444539 | orchestrator | 2026-03-28 03:25:57.444548 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-28 03:25:57.444555 | orchestrator | Saturday 28 March 2026 03:24:10 +0000 (0:00:04.247) 0:00:07.867 
******** 2026-03-28 03:25:57.444561 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:25:57.444567 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:25:57.444573 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444579 | orchestrator | 2026-03-28 03:25:57.444585 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-28 03:25:57.444591 | orchestrator | Saturday 28 March 2026 03:24:14 +0000 (0:00:04.208) 0:00:12.076 ******** 2026-03-28 03:25:57.444597 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444604 | orchestrator | 2026-03-28 03:25:57.444610 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-28 03:25:57.444616 | orchestrator | Saturday 28 March 2026 03:24:14 +0000 (0:00:00.639) 0:00:12.716 ******** 2026-03-28 03:25:57.444622 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444628 | orchestrator | 2026-03-28 03:25:57.444634 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-28 03:25:57.444640 | orchestrator | Saturday 28 March 2026 03:24:16 +0000 (0:00:01.332) 0:00:14.049 ******** 2026-03-28 03:25:57.444646 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444652 | orchestrator | 2026-03-28 03:25:57.444658 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 03:25:57.444664 | orchestrator | Saturday 28 March 2026 03:24:18 +0000 (0:00:02.653) 0:00:16.702 ******** 2026-03-28 03:25:57.444670 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:25:57.444676 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:25:57.444682 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:25:57.444687 | orchestrator | 2026-03-28 03:25:57.444693 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 
03:25:57.444699 | orchestrator | Saturday 28 March 2026 03:24:19 +0000 (0:00:00.294) 0:00:16.996 ******** 2026-03-28 03:25:57.444705 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:25:57.444711 | orchestrator | 2026-03-28 03:25:57.444717 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-28 03:25:57.444723 | orchestrator | Saturday 28 March 2026 03:24:51 +0000 (0:00:32.715) 0:00:49.712 ******** 2026-03-28 03:25:57.444729 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444735 | orchestrator | 2026-03-28 03:25:57.444741 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 03:25:57.444747 | orchestrator | Saturday 28 March 2026 03:25:06 +0000 (0:00:14.803) 0:01:04.515 ******** 2026-03-28 03:25:57.444753 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:25:57.444759 | orchestrator | 2026-03-28 03:25:57.444765 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 03:25:57.444771 | orchestrator | Saturday 28 March 2026 03:25:18 +0000 (0:00:12.017) 0:01:16.533 ******** 2026-03-28 03:25:57.444793 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:25:57.444799 | orchestrator | 2026-03-28 03:25:57.444811 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-28 03:25:57.444817 | orchestrator | Saturday 28 March 2026 03:25:19 +0000 (0:00:00.696) 0:01:17.229 ******** 2026-03-28 03:25:57.444823 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:25:57.444829 | orchestrator | 2026-03-28 03:25:57.444835 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 03:25:57.444840 | orchestrator | Saturday 28 March 2026 03:25:19 +0000 (0:00:00.482) 0:01:17.711 ******** 2026-03-28 03:25:57.444847 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-28 03:25:57.444853 | orchestrator | 2026-03-28 03:25:57.444859 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 03:25:57.444871 | orchestrator | Saturday 28 March 2026 03:25:20 +0000 (0:00:00.723) 0:01:18.435 ******** 2026-03-28 03:25:57.444877 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:25:57.444883 | orchestrator | 2026-03-28 03:25:57.444888 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-28 03:25:57.444894 | orchestrator | Saturday 28 March 2026 03:25:38 +0000 (0:00:18.068) 0:01:36.503 ******** 2026-03-28 03:25:57.444899 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:25:57.444904 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:25:57.444910 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:25:57.444915 | orchestrator | 2026-03-28 03:25:57.444920 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-28 03:25:57.444925 | orchestrator | 2026-03-28 03:25:57.444930 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-28 03:25:57.444935 | orchestrator | Saturday 28 March 2026 03:25:39 +0000 (0:00:00.330) 0:01:36.834 ******** 2026-03-28 03:25:57.444940 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:25:57.444945 | orchestrator | 2026-03-28 03:25:57.444951 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-28 03:25:57.444956 | orchestrator | Saturday 28 March 2026 03:25:39 +0000 (0:00:00.786) 0:01:37.621 ******** 2026-03-28 03:25:57.444961 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:25:57.444966 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:25:57.444971 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:25:57.444976 | orchestrator | 
2026-03-28 03:25:57.444981 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-28 03:25:57.444986 | orchestrator | Saturday 28 March 2026 03:25:41 +0000 (0:00:01.981) 0:01:39.602 ********
2026-03-28 03:25:57.444991 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.444996 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445001 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:25:57.445006 | orchestrator |
2026-03-28 03:25:57.445011 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-28 03:25:57.445016 | orchestrator | Saturday 28 March 2026 03:25:43 +0000 (0:00:02.107) 0:01:41.710 ********
2026-03-28 03:25:57.445021 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:25:57.445027 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445032 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445037 | orchestrator |
2026-03-28 03:25:57.445042 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-28 03:25:57.445047 | orchestrator | Saturday 28 March 2026 03:25:44 +0000 (0:00:00.560) 0:01:42.270 ********
2026-03-28 03:25:57.445052 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-28 03:25:57.445057 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445062 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 03:25:57.445067 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445072 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 03:25:57.445077 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-28 03:25:57.445106 | orchestrator |
2026-03-28 03:25:57.445111 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-28 03:25:57.445117 | orchestrator | Saturday 28 March 2026 03:25:51 +0000 (0:00:07.506) 0:01:49.776 ********
2026-03-28 03:25:57.445122 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:25:57.445127 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445132 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445137 | orchestrator |
2026-03-28 03:25:57.445143 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-28 03:25:57.445148 | orchestrator | Saturday 28 March 2026 03:25:52 +0000 (0:00:00.341) 0:01:50.118 ********
2026-03-28 03:25:57.445153 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-28 03:25:57.445158 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:25:57.445163 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-28 03:25:57.445173 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445179 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 03:25:57.445184 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445189 | orchestrator |
2026-03-28 03:25:57.445194 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-28 03:25:57.445199 | orchestrator | Saturday 28 March 2026 03:25:53 +0000 (0:00:01.151) 0:01:51.270 ********
2026-03-28 03:25:57.445204 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445209 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445215 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:25:57.445220 | orchestrator |
2026-03-28 03:25:57.445225 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-28 03:25:57.445230 | orchestrator | Saturday 28 March 2026 03:25:53 +0000 (0:00:00.489) 0:01:51.759 ********
2026-03-28 03:25:57.445235 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445240 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445246 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:25:57.445251 | orchestrator |
2026-03-28 03:25:57.445256 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-28 03:25:57.445261 | orchestrator | Saturday 28 March 2026 03:25:54 +0000 (0:00:00.975) 0:01:52.734 ********
2026-03-28 03:25:57.445266 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:25:57.445271 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:25:57.445280 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:27:15.710163 | orchestrator |
2026-03-28 03:27:15.710315 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-28 03:27:15.710345 | orchestrator | Saturday 28 March 2026 03:25:57 +0000 (0:00:02.493) 0:01:55.228 ********
2026-03-28 03:27:15.710367 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710385 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710397 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:27:15.710409 | orchestrator |
2026-03-28 03:27:15.710421 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-28 03:27:15.710432 | orchestrator | Saturday 28 March 2026 03:26:18 +0000 (0:00:21.349) 0:02:16.578 ********
2026-03-28 03:27:15.710443 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710454 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710465 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:27:15.710476 | orchestrator |
2026-03-28 03:27:15.710487 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-28 03:27:15.710498 | orchestrator | Saturday 28 March 2026 03:26:31 +0000 (0:00:12.401) 0:02:28.979 ********
2026-03-28 03:27:15.710509 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:27:15.710519 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710534 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710553 | orchestrator |
2026-03-28 03:27:15.710572 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-28 03:27:15.710592 | orchestrator | Saturday 28 March 2026 03:26:32 +0000 (0:00:01.126) 0:02:30.105 ********
2026-03-28 03:27:15.710613 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710632 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710653 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:27:15.710672 | orchestrator |
2026-03-28 03:27:15.710693 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-28 03:27:15.710713 | orchestrator | Saturday 28 March 2026 03:26:44 +0000 (0:00:12.464) 0:02:42.570 ********
2026-03-28 03:27:15.710731 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:15.710750 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710770 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710788 | orchestrator |
2026-03-28 03:27:15.710806 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-28 03:27:15.710825 | orchestrator | Saturday 28 March 2026 03:26:45 +0000 (0:00:01.146) 0:02:43.717 ********
2026-03-28 03:27:15.710879 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:15.710900 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:15.710919 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:15.710939 | orchestrator |
2026-03-28 03:27:15.710957 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-28 03:27:15.710974 | orchestrator |
2026-03-28 03:27:15.710993 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-28 03:27:15.711010 | orchestrator | Saturday 28 March 2026 03:26:46 +0000 (0:00:00.372) 0:02:44.089 ********
2026-03-28 03:27:15.711028 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:27:15.711048 | orchestrator |
2026-03-28 03:27:15.711067 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-28 03:27:15.711137 | orchestrator | Saturday 28 March 2026 03:26:47 +0000 (0:00:00.806) 0:02:44.896 ********
2026-03-28 03:27:15.711150 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-28 03:27:15.711161 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-28 03:27:15.711172 | orchestrator |
2026-03-28 03:27:15.711184 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-28 03:27:15.711194 | orchestrator | Saturday 28 March 2026 03:26:50 +0000 (0:00:03.287) 0:02:48.183 ********
2026-03-28 03:27:15.711206 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-28 03:27:15.711270 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-28 03:27:15.711283 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-28 03:27:15.711294 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-28 03:27:15.711305 | orchestrator |
2026-03-28 03:27:15.711317 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-28 03:27:15.711328 | orchestrator | Saturday 28 March 2026 03:26:56 +0000 (0:00:06.371) 0:02:54.555 ********
2026-03-28 03:27:15.711339 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 03:27:15.711350 | orchestrator |
2026-03-28 03:27:15.711361 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-28 03:27:15.711372 | orchestrator | Saturday 28 March 2026 03:26:59 +0000 (0:00:03.218) 0:02:57.773 ********
2026-03-28 03:27:15.711383 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 03:27:15.711393 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-28 03:27:15.711404 | orchestrator |
2026-03-28 03:27:15.711415 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-28 03:27:15.711426 | orchestrator | Saturday 28 March 2026 03:27:03 +0000 (0:00:03.879) 0:03:01.652 ********
2026-03-28 03:27:15.711437 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 03:27:15.711448 | orchestrator |
2026-03-28 03:27:15.711459 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-28 03:27:15.711469 | orchestrator | Saturday 28 March 2026 03:27:07 +0000 (0:00:03.196) 0:03:04.849 ********
2026-03-28 03:27:15.711480 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-28 03:27:15.711491 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-28 03:27:15.711502 | orchestrator |
2026-03-28 03:27:15.711513 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-28 03:27:15.711552 | orchestrator | Saturday 28 March 2026 03:27:14 +0000 (0:00:07.270) 0:03:12.120 ********
2026-03-28 03:27:15.711572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:15.711608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:15.711622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:15.711662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.485605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.485699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.485711 | orchestrator |
2026-03-28 03:27:20.485721 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-28 03:27:20.485730 | orchestrator | Saturday 28 March 2026 03:27:15 +0000 (0:00:01.370) 0:03:13.490 ********
2026-03-28 03:27:20.485739 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:20.485747 | orchestrator |
2026-03-28 03:27:20.485756 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-28 03:27:20.485764 | orchestrator | Saturday 28 March 2026 03:27:15 +0000 (0:00:00.143) 0:03:13.634 ********
2026-03-28 03:27:20.485772 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:20.485780 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:20.485788 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:20.485795 | orchestrator |
2026-03-28 03:27:20.485803 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-28 03:27:20.485811 | orchestrator | Saturday 28 March 2026 03:27:16 +0000 (0:00:00.301) 0:03:13.936 ********
2026-03-28 03:27:20.485819 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 03:27:20.485827 | orchestrator |
2026-03-28 03:27:20.485835 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-28 03:27:20.485843 | orchestrator | Saturday 28 March 2026 03:27:16 +0000 (0:00:00.724) 0:03:14.660 ********
2026-03-28 03:27:20.485851 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:20.485859 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:20.485867 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:20.485875 | orchestrator |
2026-03-28 03:27:20.485883 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-28 03:27:20.485891 | orchestrator | Saturday 28 March 2026 03:27:17 +0000 (0:00:00.585) 0:03:15.246 ********
2026-03-28 03:27:20.485899 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:27:20.485907 | orchestrator |
2026-03-28 03:27:20.485916 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-28 03:27:20.485924 | orchestrator | Saturday 28 March 2026 03:27:18 +0000 (0:00:00.668) 0:03:15.915 ********
2026-03-28 03:27:20.485936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:20.485996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:20.486008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:20.486078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.486110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.486132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:20.486143 | orchestrator |
2026-03-28 03:27:20.486159 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-28 03:27:22.220524 | orchestrator | Saturday 28 March 2026 03:27:20 +0000 (0:00:02.354) 0:03:18.269 ********
2026-03-28 03:27:22.220615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:22.220629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:22.220638 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:22.220647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:22.220678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:22.220700 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:22.220723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:22.220732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:22.220740 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:22.220747 | orchestrator |
2026-03-28 03:27:22.220755 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-28 03:27:22.220763 | orchestrator | Saturday 28 March 2026 03:27:21 +0000 (0:00:00.887) 0:03:19.156 ********
2026-03-28 03:27:22.220771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:22.220784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:22.220792 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:27:22.220808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:24.568293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:24.568401 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:27:24.568422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:24.568467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 03:27:24.568480 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:27:24.568493 | orchestrator |
2026-03-28 03:27:24.568506 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-28 03:27:24.568519 | orchestrator | Saturday 28 March 2026 03:27:22 +0000 (0:00:00.852) 0:03:20.009 ********
2026-03-28 03:27:24.568547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:24.568579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-28 03:27:24.568594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:24.568615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:24.568633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:24.568652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-28 03:27:31.242620 | orchestrator | 2026-03-28 03:27:31.242730 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-28 03:27:31.242750 | orchestrator | Saturday 28 March 2026 03:27:24 +0000 (0:00:02.345) 0:03:22.355 ******** 2026-03-28 03:27:31.242778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:31.242821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:31.242879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:31.242967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:31.242984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:31.243005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:31.243017 | orchestrator | 2026-03-28 03:27:31.243029 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-28 03:27:31.243040 | orchestrator | Saturday 28 March 2026 03:27:30 +0000 (0:00:06.074) 0:03:28.429 ******** 2026-03-28 03:27:31.243058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 03:27:31.243071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:27:31.243160 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:27:31.243197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 03:27:35.900213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:27:35.900287 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:27:35.900297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-28 03:27:35.900318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:27:35.900327 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:27:35.900335 | orchestrator | 2026-03-28 03:27:35.900344 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-28 03:27:35.900354 | orchestrator | Saturday 28 March 2026 03:27:31 +0000 (0:00:00.604) 0:03:29.033 ******** 2026-03-28 03:27:35.900361 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:27:35.900375 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:27:35.900383 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:27:35.900391 | orchestrator | 2026-03-28 03:27:35.900398 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-28 03:27:35.900406 | orchestrator | Saturday 28 March 2026 03:27:32 +0000 (0:00:01.754) 0:03:30.788 ******** 2026-03-28 03:27:35.900414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:27:35.900422 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:27:35.900429 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:27:35.900437 | orchestrator | 2026-03-28 03:27:35.900445 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-28 03:27:35.900452 | orchestrator | Saturday 28 March 2026 03:27:33 +0000 (0:00:00.338) 0:03:31.127 ******** 2026-03-28 03:27:35.900476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:35.900498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:35.900510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-28 03:27:35.900515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:35.900527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:27:35.900536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:14.996071 | orchestrator | 2026-03-28 03:28:14.996229 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 03:28:14.996246 | orchestrator | Saturday 28 March 2026 03:27:35 +0000 (0:00:02.090) 0:03:33.217 ******** 2026-03-28 03:28:14.996258 | orchestrator | 2026-03-28 03:28:14.996270 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 03:28:14.996281 | orchestrator | Saturday 28 March 2026 03:27:35 +0000 (0:00:00.146) 0:03:33.364 ******** 2026-03-28 
03:28:14.996292 | orchestrator | 2026-03-28 03:28:14.996303 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 03:28:14.996314 | orchestrator | Saturday 28 March 2026 03:27:35 +0000 (0:00:00.163) 0:03:33.527 ******** 2026-03-28 03:28:14.996325 | orchestrator | 2026-03-28 03:28:14.996336 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-28 03:28:14.996347 | orchestrator | Saturday 28 March 2026 03:27:35 +0000 (0:00:00.157) 0:03:33.685 ******** 2026-03-28 03:28:14.996358 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:28:14.996370 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:28:14.996381 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:28:14.996391 | orchestrator | 2026-03-28 03:28:14.996403 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-28 03:28:14.996414 | orchestrator | Saturday 28 March 2026 03:27:54 +0000 (0:00:18.845) 0:03:52.530 ******** 2026-03-28 03:28:14.996425 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:28:14.996436 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:28:14.996447 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:28:14.996457 | orchestrator | 2026-03-28 03:28:14.996468 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-28 03:28:14.996479 | orchestrator | 2026-03-28 03:28:14.996490 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 03:28:14.996501 | orchestrator | Saturday 28 March 2026 03:28:03 +0000 (0:00:08.686) 0:04:01.217 ******** 2026-03-28 03:28:14.996514 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:28:14.996526 | orchestrator | 2026-03-28 03:28:14.996537 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 03:28:14.996564 | orchestrator | Saturday 28 March 2026 03:28:04 +0000 (0:00:01.226) 0:04:02.444 ******** 2026-03-28 03:28:14.996576 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:14.996587 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:14.996598 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:14.996635 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:14.996648 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:14.996661 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:14.996674 | orchestrator | 2026-03-28 03:28:14.996687 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-28 03:28:14.996699 | orchestrator | Saturday 28 March 2026 03:28:05 +0000 (0:00:00.798) 0:04:03.242 ******** 2026-03-28 03:28:14.996711 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:14.996724 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:14.996736 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:14.996749 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:28:14.996763 | orchestrator | 2026-03-28 03:28:14.996775 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 03:28:14.996788 | orchestrator | Saturday 28 March 2026 03:28:06 +0000 (0:00:00.877) 0:04:04.120 ******** 2026-03-28 03:28:14.996799 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-28 03:28:14.996810 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-28 03:28:14.996821 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-28 03:28:14.996832 | orchestrator | 2026-03-28 03:28:14.996843 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 
03:28:14.996854 | orchestrator | Saturday 28 March 2026 03:28:07 +0000 (0:00:00.855) 0:04:04.976 ******** 2026-03-28 03:28:14.996865 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-28 03:28:14.996876 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-28 03:28:14.996887 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-28 03:28:14.996898 | orchestrator | 2026-03-28 03:28:14.996909 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 03:28:14.996920 | orchestrator | Saturday 28 March 2026 03:28:08 +0000 (0:00:01.208) 0:04:06.184 ******** 2026-03-28 03:28:14.996931 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-28 03:28:14.996941 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:14.996952 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-28 03:28:14.996963 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:14.996974 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-28 03:28:14.996985 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:14.996995 | orchestrator | 2026-03-28 03:28:14.997006 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-28 03:28:14.997017 | orchestrator | Saturday 28 March 2026 03:28:09 +0000 (0:00:00.630) 0:04:06.815 ******** 2026-03-28 03:28:14.997028 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 03:28:14.997039 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 03:28:14.997050 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 03:28:14.997061 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 03:28:14.997072 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 03:28:14.997100 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 03:28:14.997112 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 03:28:14.997123 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:14.997150 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 03:28:14.997163 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 03:28:14.997174 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 03:28:14.997185 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:14.997196 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 03:28:14.997215 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 03:28:14.997226 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 03:28:14.997237 | orchestrator | 2026-03-28 03:28:14.997248 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-28 03:28:14.997259 | orchestrator | Saturday 28 March 2026 03:28:10 +0000 (0:00:01.255) 0:04:08.071 ******** 2026-03-28 03:28:14.997270 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:14.997281 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:14.997292 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:14.997303 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:28:14.997314 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:28:14.997325 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:28:14.997336 | orchestrator | 2026-03-28 03:28:14.997347 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-28 03:28:14.997358 | orchestrator | 
Saturday 28 March 2026 03:28:11 +0000 (0:00:01.191) 0:04:09.262 ******** 2026-03-28 03:28:14.997368 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:14.997379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:14.997390 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:14.997401 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:28:14.997411 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:28:14.997422 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:28:14.997433 | orchestrator | 2026-03-28 03:28:14.997444 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-28 03:28:14.997455 | orchestrator | Saturday 28 March 2026 03:28:13 +0000 (0:00:01.679) 0:04:10.942 ******** 2026-03-28 03:28:14.997474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:14.997491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:14.997503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:14.997531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.176520 | orchestrator | 2026-03-28 03:28:20.176532 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 
03:28:20.176557 | orchestrator | Saturday 28 March 2026 03:28:15 +0000 (0:00:02.241) 0:04:13.184 ******** 2026-03-28 03:28:20.176565 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:28:20.176572 | orchestrator | 2026-03-28 03:28:20.176578 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-28 03:28:20.176585 | orchestrator | Saturday 28 March 2026 03:28:16 +0000 (0:00:01.312) 0:04:14.496 ******** 2026-03-28 03:28:20.176598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 
03:28:20.387342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:20.387395 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:21.642569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:21.642667 | orchestrator | 2026-03-28 03:28:21.642682 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 03:28:21.642695 | orchestrator | Saturday 28 March 2026 03:28:20 +0000 (0:00:03.683) 0:04:18.179 ******** 2026-03-28 03:28:21.642714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:21.642765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:21.642787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:21.642804 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:21.642848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:21.642868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:21.642885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:21.642915 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:21.642932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:21.642949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:21.642968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:21.642985 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:21.643010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:23.375070 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:23.375265 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:23.375281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:23.375291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:23.375300 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
03:28:23.375308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:23.375317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:23.375325 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:23.375333 | orchestrator | 2026-03-28 03:28:23.375342 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-28 03:28:23.375351 | orchestrator | Saturday 28 March 2026 03:28:22 +0000 (0:00:01.655) 0:04:19.834 ******** 2026-03-28 03:28:23.375394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:23.375411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:23.375422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:23.375431 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:23.375439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:23.375448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:23.375460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:23.375468 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:23.375483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:34.991047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:34.991209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:34.991240 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:34.991264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:34.991285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:28:34.991306 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:34.991345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:34.991404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:34.991418 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:34.991430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:28:34.991442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:28:34.991452 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:34.991464 | orchestrator | 2026-03-28 03:28:34.991476 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 03:28:34.991489 | orchestrator | Saturday 28 March 2026 03:28:24 +0000 (0:00:01.968) 0:04:21.803 ******** 2026-03-28 03:28:34.991500 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:34.991511 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:34.991522 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:34.991533 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:28:34.991545 | orchestrator | 2026-03-28 03:28:34.991556 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-28 03:28:34.991567 | orchestrator | Saturday 28 March 2026 03:28:25 +0000 
(0:00:01.153) 0:04:22.957 ******** 2026-03-28 03:28:34.991578 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:28:34.991591 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:28:34.991604 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:28:34.991617 | orchestrator | 2026-03-28 03:28:34.991630 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-28 03:28:34.991643 | orchestrator | Saturday 28 March 2026 03:28:26 +0000 (0:00:01.193) 0:04:24.150 ******** 2026-03-28 03:28:34.991655 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:28:34.991668 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:28:34.991681 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:28:34.991693 | orchestrator | 2026-03-28 03:28:34.991703 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-28 03:28:34.991714 | orchestrator | Saturday 28 March 2026 03:28:27 +0000 (0:00:01.055) 0:04:25.206 ******** 2026-03-28 03:28:34.991733 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:28:34.991745 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:28:34.991756 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:28:34.991767 | orchestrator | 2026-03-28 03:28:34.991777 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-28 03:28:34.991788 | orchestrator | Saturday 28 March 2026 03:28:27 +0000 (0:00:00.581) 0:04:25.787 ******** 2026-03-28 03:28:34.991799 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:28:34.991810 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:28:34.991820 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:28:34.991831 | orchestrator | 2026-03-28 03:28:34.991842 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-28 03:28:34.991852 | orchestrator | Saturday 28 March 2026 03:28:28 
+0000 (0:00:00.492) 0:04:26.280 ******** 2026-03-28 03:28:34.991863 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 03:28:34.991874 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 03:28:34.991885 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 03:28:34.991896 | orchestrator | 2026-03-28 03:28:34.991907 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-28 03:28:34.991918 | orchestrator | Saturday 28 March 2026 03:28:29 +0000 (0:00:01.400) 0:04:27.680 ******** 2026-03-28 03:28:34.991934 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 03:28:34.991945 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 03:28:34.991956 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 03:28:34.991967 | orchestrator | 2026-03-28 03:28:34.991977 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-28 03:28:34.991988 | orchestrator | Saturday 28 March 2026 03:28:31 +0000 (0:00:01.224) 0:04:28.905 ******** 2026-03-28 03:28:34.991999 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 03:28:34.992010 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 03:28:34.992020 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 03:28:34.992031 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-28 03:28:34.992042 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-28 03:28:34.992053 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-28 03:28:34.992064 | orchestrator | 2026-03-28 03:28:34.992133 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-28 03:28:49.747465 | orchestrator | Saturday 28 March 2026 03:28:34 +0000 (0:00:03.865) 
0:04:32.771 ******** 2026-03-28 03:28:49.747555 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:49.747566 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:49.747573 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:49.747579 | orchestrator | 2026-03-28 03:28:49.747587 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-28 03:28:49.747594 | orchestrator | Saturday 28 March 2026 03:28:35 +0000 (0:00:00.321) 0:04:33.093 ******** 2026-03-28 03:28:49.747600 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:49.747607 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:49.747613 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:49.747619 | orchestrator | 2026-03-28 03:28:49.747626 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-28 03:28:49.747633 | orchestrator | Saturday 28 March 2026 03:28:35 +0000 (0:00:00.538) 0:04:33.632 ******** 2026-03-28 03:28:49.747639 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:28:49.747646 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:28:49.747652 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:28:49.747658 | orchestrator | 2026-03-28 03:28:49.747664 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-28 03:28:49.747671 | orchestrator | Saturday 28 March 2026 03:28:37 +0000 (0:00:01.262) 0:04:34.894 ******** 2026-03-28 03:28:49.747677 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-28 03:28:49.747705 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-28 03:28:49.747712 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 
'name': 'client.nova secret', 'enabled': True}) 2026-03-28 03:28:49.747718 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-28 03:28:49.747725 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-28 03:28:49.747735 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-28 03:28:49.747746 | orchestrator | 2026-03-28 03:28:49.747756 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-28 03:28:49.747766 | orchestrator | Saturday 28 March 2026 03:28:40 +0000 (0:00:03.250) 0:04:38.145 ******** 2026-03-28 03:28:49.747777 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 03:28:49.747787 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 03:28:49.747797 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 03:28:49.747808 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 03:28:49.747818 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:28:49.747828 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 03:28:49.747838 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:28:49.747848 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 03:28:49.747858 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:28:49.747868 | orchestrator | 2026-03-28 03:28:49.747879 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-28 03:28:49.747890 | orchestrator | Saturday 28 March 2026 03:28:43 +0000 (0:00:03.305) 0:04:41.451 ******** 2026-03-28 03:28:49.747901 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:49.747913 | orchestrator | 
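The "Pushing nova secret xml for libvirt" and "Pushing secrets key for libvirt" tasks above register the Ceph client keys with libvirt on each compute node. As a rough sketch of what gets templated per secret (assuming the standard libvirt secret XML format used by kolla-ansible's nova-cell role; the UUID shown is the client.nova one from this run's log):

```xml
<!-- Sketch of a libvirt Ceph secret definition, assuming the usual
     kolla-ansible secret.xml layout; not copied from this run's files. -->
<secret ephemeral='no' private='no'>
  <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
```

The follow-up "Pushing secrets key for libvirt" step then loads the actual key material, roughly equivalent to running `virsh secret-define <file>` followed by `virsh secret-set-value --secret <uuid> --base64 <key>` inside the nova_libvirt container.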
2026-03-28 03:28:49.747921 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-28 03:28:49.747927 | orchestrator | Saturday 28 March 2026 03:28:43 +0000 (0:00:00.137) 0:04:41.588 ******** 2026-03-28 03:28:49.747934 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:49.747940 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:49.747946 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:49.747953 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:49.747959 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:49.747965 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:49.747971 | orchestrator | 2026-03-28 03:28:49.747977 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-28 03:28:49.747984 | orchestrator | Saturday 28 March 2026 03:28:44 +0000 (0:00:00.865) 0:04:42.454 ******** 2026-03-28 03:28:49.747990 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:28:49.747996 | orchestrator | 2026-03-28 03:28:49.748003 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-28 03:28:49.748009 | orchestrator | Saturday 28 March 2026 03:28:45 +0000 (0:00:00.745) 0:04:43.200 ******** 2026-03-28 03:28:49.748027 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:28:49.748035 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:28:49.748042 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:28:49.748049 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:28:49.748057 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:28:49.748064 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:28:49.748070 | orchestrator | 2026-03-28 03:28:49.748078 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-28 03:28:49.748141 | orchestrator | Saturday 28 March 2026 03:28:46 +0000 
(0:00:00.839) 0:04:44.039 ******** 2026-03-28 03:28:49.748180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:49.748191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:49.748200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:28:49.748209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:49.748222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:49.748240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428421 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2026-03-28 03:28:54.428432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:54.428539 | orchestrator | 2026-03-28 03:28:54.428548 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-28 03:28:54.428556 | orchestrator | Saturday 28 March 2026 03:28:49 +0000 (0:00:03.656) 0:04:47.695 ******** 2026-03-28 03:28:54.428565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:54.428579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:54.428598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:56.880675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:56.880762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:28:56.880773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:28:56.880781 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:28:56.880910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 03:28:56.880920 | orchestrator |
2026-03-28 03:28:56.880931 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-28 03:28:56.880951 | orchestrator | Saturday 28 March 2026 03:28:56 +0000 (0:00:06.971) 0:04:54.667 ********
2026-03-28 03:29:18.503559 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:29:18.503741 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:29:18.503762 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:29:18.503774 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.503785 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.503797 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.503808 | orchestrator |
2026-03-28 03:29:18.503825 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-28 03:29:18.503846 | orchestrator | Saturday 28 March 2026 03:28:58 +0000 (0:00:01.435) 0:04:56.103 ********
2026-03-28 03:29:18.503864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503903 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503922 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503941 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503953 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-28 03:29:18.503964 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.503977 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.503988 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.503999 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.504009 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.504020 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.504034 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.504047 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.504151 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-28 03:29:18.504166 | orchestrator |
2026-03-28 03:29:18.504180 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-28 03:29:18.504191 | orchestrator | Saturday 28 March 2026 03:29:02 +0000 (0:00:03.715) 0:04:59.819 ********
2026-03-28 03:29:18.504202 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:29:18.504213 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:29:18.504224 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:29:18.504234 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.504245 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.504255 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.504266 | orchestrator |
2026-03-28 03:29:18.504277 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-28 03:29:18.504288 | orchestrator | Saturday 28 March 2026 03:29:02 +0000 (0:00:00.652) 0:05:00.471 ********
2026-03-28 03:29:18.504299 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504311 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504322 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504333 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504344 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504354 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504385 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-28 03:29:18.504396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504407 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504418 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504428 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.504447 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504466 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.504484 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504504 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.504524 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504545 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504581 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504607 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504618 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504629 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-28 03:29:18.504639 | orchestrator |
2026-03-28 03:29:18.504650 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-28 03:29:18.504661 | orchestrator | Saturday 28 March 2026 03:29:08 +0000 (0:00:05.370) 0:05:05.841 ********
2026-03-28 03:29:18.504684 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504695 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504705 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504716 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504727 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504738 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 03:29:18.504749 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504760 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504771 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504781 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504792 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504803 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504813 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504824 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504835 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.504846 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504856 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.504868 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504878 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504889 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.504900 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 03:29:18.504911 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504921 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504932 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 03:29:18.504942 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504953 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504964 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 03:29:18.504974 | orchestrator |
2026-03-28 03:29:18.504992 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-28 03:29:18.505003 | orchestrator | Saturday 28 March 2026 03:29:14 +0000 (0:00:06.884) 0:05:12.726 ********
2026-03-28 03:29:18.505013 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:29:18.505024 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:29:18.505035 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:29:18.505046 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.505057 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.505067 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.505078 | orchestrator |
2026-03-28 03:29:18.505110 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-28 03:29:18.505121 | orchestrator | Saturday 28 March 2026 03:29:15 +0000 (0:00:00.891) 0:05:13.617 ********
2026-03-28 03:29:18.505132 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:29:18.505150 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:29:18.505161 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:29:18.505172 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.505182 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.505193 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.505204 | orchestrator |
2026-03-28 03:29:18.505214 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-28 03:29:18.505225 | orchestrator | Saturday 28 March 2026 03:29:16 +0000 (0:00:00.678) 0:05:14.295 ********
2026-03-28 03:29:18.505236 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:29:18.505247 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:29:18.505258 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:29:18.505269 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:29:18.505279 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:29:18.505290 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:29:18.505301 | orchestrator |
2026-03-28 03:29:18.505318 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-28 03:29:19.683798 | orchestrator | Saturday 28 March 2026 03:29:18 +0000 (0:00:01.984) 0:05:16.280 ********
2026-03-28 03:29:19.683885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 03:29:19.683898 |
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:29:19.683905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:29:19.683926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:29:19.683963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 03:29:19.683970 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:29:19.683977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 
03:29:19.683983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 03:29:19.683989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:29:19.683995 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:29:19.684001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 03:29:19.684020 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:29:19.684027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:29:19.684038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:29:23.296718 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:29:23.296876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:29:23.296924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:29:23.296949 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:29:23.296966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 03:29:23.296979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 03:29:23.297018 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:29:23.297031 | orchestrator | 2026-03-28 03:29:23.297043 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-28 03:29:23.297056 | orchestrator | Saturday 28 March 2026 03:29:19 +0000 (0:00:01.431) 0:05:17.711 ******** 2026-03-28 03:29:23.297067 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 03:29:23.297079 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297161 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:29:23.297174 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 03:29:23.297185 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297195 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:29:23.297206 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 03:29:23.297217 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297228 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:29:23.297239 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-28 03:29:23.297250 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297263 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:29:23.297276 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-28 
03:29:23.297288 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297301 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:29:23.297313 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 03:29:23.297325 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 03:29:23.297337 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:29:23.297350 | orchestrator | 2026-03-28 03:29:23.297362 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-28 03:29:23.297375 | orchestrator | Saturday 28 March 2026 03:29:20 +0000 (0:00:00.930) 0:05:18.641 ******** 2026-03-28 03:29:23.297413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:29:23.297429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:29:23.297452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 03:29:23.297471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:29:23.297485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:29:23.297508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792715 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 03:30:18.792772 | orchestrator | 2026-03-28 03:30:18.792780 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 03:30:18.792788 | orchestrator | Saturday 28 March 2026 03:29:23 +0000 (0:00:02.716) 0:05:21.357 ******** 2026-03-28 
03:30:18.792795 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:30:18.792802 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:30:18.792808 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:30:18.792814 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:30:18.792821 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:30:18.792827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:30:18.792833 | orchestrator | 2026-03-28 03:30:18.792840 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 03:30:18.792846 | orchestrator | Saturday 28 March 2026 03:29:24 +0000 (0:00:00.863) 0:05:22.221 ******** 2026-03-28 03:30:18.792853 | orchestrator | 2026-03-28 03:30:18.792859 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 03:30:18.792866 | orchestrator | Saturday 28 March 2026 03:29:24 +0000 (0:00:00.169) 0:05:22.390 ******** 2026-03-28 03:30:18.792872 | orchestrator | 2026-03-28 03:30:18.792879 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 03:30:18.792889 | orchestrator | Saturday 28 March 2026 03:29:24 +0000 (0:00:00.148) 0:05:22.539 ******** 2026-03-28 03:30:18.792895 | orchestrator | 2026-03-28 03:30:18.792902 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 03:30:18.792908 | orchestrator | Saturday 28 March 2026 03:29:24 +0000 (0:00:00.147) 0:05:22.686 ******** 2026-03-28 03:30:18.792915 | orchestrator | 2026-03-28 03:30:18.792921 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-28 03:30:18.792927 | orchestrator | Saturday 28 March 2026 03:29:25 +0000 (0:00:00.141) 0:05:22.828 ******** 2026-03-28 03:30:18.792934 | orchestrator | 2026-03-28 03:30:18.792940 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-28 03:30:18.792947 | orchestrator | Saturday 28 March 2026 03:29:25 +0000 (0:00:00.304) 0:05:23.132 ******** 2026-03-28 03:30:18.792953 | orchestrator | 2026-03-28 03:30:18.792959 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-28 03:30:18.792966 | orchestrator | Saturday 28 March 2026 03:29:25 +0000 (0:00:00.139) 0:05:23.271 ******** 2026-03-28 03:30:18.792972 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:30:18.792978 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:30:18.792985 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:30:18.792991 | orchestrator | 2026-03-28 03:30:18.792997 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-28 03:30:18.793004 | orchestrator | Saturday 28 March 2026 03:29:32 +0000 (0:00:07.279) 0:05:30.551 ******** 2026-03-28 03:30:18.793010 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:30:18.793016 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:30:18.793022 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:30:18.793029 | orchestrator | 2026-03-28 03:30:18.793035 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-28 03:30:18.793047 | orchestrator | Saturday 28 March 2026 03:29:52 +0000 (0:00:19.609) 0:05:50.160 ******** 2026-03-28 03:30:18.793053 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:30:18.793059 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:30:18.793066 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:30:18.793072 | orchestrator | 2026-03-28 03:30:18.793082 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-28 03:32:44.461338 | orchestrator | Saturday 28 March 2026 03:30:18 +0000 (0:00:26.409) 0:06:16.570 ******** 2026-03-28 03:32:44.461488 | orchestrator | changed: 
[testbed-node-3] 2026-03-28 03:32:44.461505 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:32:44.461518 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:32:44.461529 | orchestrator | 2026-03-28 03:32:44.461541 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-28 03:32:44.461553 | orchestrator | Saturday 28 March 2026 03:31:00 +0000 (0:00:41.908) 0:06:58.479 ******** 2026-03-28 03:32:44.461564 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:32:44.461575 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:32:44.461586 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:32:44.461596 | orchestrator | 2026-03-28 03:32:44.461607 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-28 03:32:44.461618 | orchestrator | Saturday 28 March 2026 03:31:01 +0000 (0:00:00.824) 0:06:59.303 ******** 2026-03-28 03:32:44.461629 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:32:44.461640 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:32:44.461650 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:32:44.461661 | orchestrator | 2026-03-28 03:32:44.461672 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-28 03:32:44.461682 | orchestrator | Saturday 28 March 2026 03:31:02 +0000 (0:00:00.811) 0:07:00.114 ******** 2026-03-28 03:32:44.461693 | orchestrator | changed: [testbed-node-3] 2026-03-28 03:32:44.461704 | orchestrator | changed: [testbed-node-5] 2026-03-28 03:32:44.461715 | orchestrator | changed: [testbed-node-4] 2026-03-28 03:32:44.461726 | orchestrator | 2026-03-28 03:32:44.461736 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-28 03:32:44.461748 | orchestrator | Saturday 28 March 2026 03:31:33 +0000 (0:00:31.051) 0:07:31.166 ******** 2026-03-28 03:32:44.461759 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 03:32:44.461769 | orchestrator | 2026-03-28 03:32:44.461780 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-28 03:32:44.461791 | orchestrator | Saturday 28 March 2026 03:31:33 +0000 (0:00:00.148) 0:07:31.314 ******** 2026-03-28 03:32:44.461801 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:32:44.461812 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:32:44.461822 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:32:44.461833 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:32:44.461844 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:32:44.461855 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-28 03:32:44.461870 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:32:44.461882 | orchestrator | 2026-03-28 03:32:44.461895 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-28 03:32:44.461907 | orchestrator | Saturday 28 March 2026 03:31:55 +0000 (0:00:22.348) 0:07:53.663 ******** 2026-03-28 03:32:44.461920 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:32:44.461932 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:32:44.461945 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:32:44.461958 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:32:44.461971 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:32:44.461983 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:32:44.461997 | orchestrator | 2026-03-28 03:32:44.462009 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-28 03:32:44.462132 | orchestrator | Saturday 28 March 2026 03:32:05 +0000 (0:00:09.855) 0:08:03.518 ******** 2026-03-28 03:32:44.462152 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 03:32:44.462184 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:32:44.462198 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:32:44.462212 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:32:44.462232 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:32:44.462247 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-28 03:32:44.462258 | orchestrator | 2026-03-28 03:32:44.462282 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 03:32:44.462294 | orchestrator | Saturday 28 March 2026 03:32:10 +0000 (0:00:04.363) 0:08:07.882 ******** 2026-03-28 03:32:44.462305 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:32:44.462316 | orchestrator | 2026-03-28 03:32:44.462327 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 03:32:44.462338 | orchestrator | Saturday 28 March 2026 03:32:23 +0000 (0:00:13.378) 0:08:21.261 ******** 2026-03-28 03:32:44.462349 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:32:44.462360 | orchestrator | 2026-03-28 03:32:44.462371 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-28 03:32:44.462382 | orchestrator | Saturday 28 March 2026 03:32:25 +0000 (0:00:01.586) 0:08:22.847 ******** 2026-03-28 03:32:44.462393 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:32:44.462404 | orchestrator | 2026-03-28 03:32:44.462415 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-28 03:32:44.462426 | orchestrator | Saturday 28 March 2026 03:32:26 +0000 (0:00:01.584) 0:08:24.431 ******** 2026-03-28 03:32:44.462437 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 03:32:44.462448 | orchestrator | 2026-03-28 03:32:44.462459 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-28 03:32:44.462470 | orchestrator | Saturday 28 March 2026 03:32:38 +0000 (0:00:12.119) 0:08:36.550 ******** 2026-03-28 03:32:44.462481 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:32:44.462493 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:32:44.462504 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:32:44.462515 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:32:44.462526 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:32:44.462536 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:32:44.462547 | orchestrator | 2026-03-28 03:32:44.462558 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-28 03:32:44.462569 | orchestrator | 2026-03-28 03:32:44.462580 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-28 03:32:44.462609 | orchestrator | Saturday 28 March 2026 03:32:40 +0000 (0:00:01.836) 0:08:38.386 ******** 2026-03-28 03:32:44.462621 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:32:44.462632 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:32:44.462643 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:32:44.462654 | orchestrator | 2026-03-28 03:32:44.462665 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-28 03:32:44.462676 | orchestrator | 2026-03-28 03:32:44.462687 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-28 03:32:44.462698 | orchestrator | Saturday 28 March 2026 03:32:41 +0000 (0:00:01.004) 0:08:39.391 ******** 2026-03-28 03:32:44.462709 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:32:44.462720 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:32:44.462731 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:32:44.462742 | orchestrator | 2026-03-28 
03:32:44.462753 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-28 03:32:44.462763 | orchestrator | 2026-03-28 03:32:44.462774 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-28 03:32:44.462785 | orchestrator | Saturday 28 March 2026 03:32:42 +0000 (0:00:00.764) 0:08:40.156 ******** 2026-03-28 03:32:44.462805 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-28 03:32:44.462816 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 03:32:44.462827 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 03:32:44.462839 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-28 03:32:44.462849 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-28 03:32:44.462870 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-28 03:32:44.462882 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:32:44.462893 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-28 03:32:44.462904 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 03:32:44.462915 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 03:32:44.462926 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-28 03:32:44.462937 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-28 03:32:44.462948 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-28 03:32:44.462959 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:32:44.462969 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-28 03:32:44.462980 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 03:32:44.462991 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)
2026-03-28 03:32:44.463002 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-28 03:32:44.463013 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-28 03:32:44.463024 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-28 03:32:44.463034 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:32:44.463045 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-28 03:32:44.463056 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-28 03:32:44.463067 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-28 03:32:44.463078 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-28 03:32:44.463089 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-28 03:32:44.463117 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-28 03:32:44.463129 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:32:44.463140 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-28 03:32:44.463151 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-28 03:32:44.463168 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-28 03:32:44.463179 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-28 03:32:44.463190 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-28 03:32:44.463201 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-28 03:32:44.463212 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:32:44.463223 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-28 03:32:44.463234 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-28 03:32:44.463245 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-28 03:32:44.463256 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-28 03:32:44.463267 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-28 03:32:44.463277 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-28 03:32:44.463289 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:32:44.463300 | orchestrator |
2026-03-28 03:32:44.463311 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-28 03:32:44.463322 | orchestrator |
2026-03-28 03:32:44.463333 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-28 03:32:44.463351 | orchestrator | Saturday 28 March 2026 03:32:43 +0000 (0:00:01.481) 0:08:41.638 ********
2026-03-28 03:32:44.463362 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-28 03:32:44.463373 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-28 03:32:44.463383 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:32:44.463394 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-28 03:32:44.463405 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-28 03:32:44.463416 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:32:44.463427 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-28 03:32:44.463438 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-28 03:32:44.463449 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:32:44.463460 | orchestrator |
2026-03-28 03:32:44.463477 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-28 03:32:46.383169 | orchestrator |
2026-03-28 03:32:46.383295 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-28 03:32:46.383313 | orchestrator | Saturday 28 March 2026 03:32:44 +0000 (0:00:00.603) 0:08:42.241 ********
2026-03-28 03:32:46.383324 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:32:46.383336 | orchestrator |
2026-03-28 03:32:46.383346 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-28 03:32:46.383356 | orchestrator |
2026-03-28 03:32:46.383366 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-28 03:32:46.383376 | orchestrator | Saturday 28 March 2026 03:32:45 +0000 (0:00:00.972) 0:08:43.214 ********
2026-03-28 03:32:46.383386 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:32:46.383396 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:32:46.383405 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:32:46.383415 | orchestrator |
2026-03-28 03:32:46.383425 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:32:46.383435 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 03:32:46.383447 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-28 03:32:46.383457 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-28 03:32:46.383467 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-28 03:32:46.383477 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-28 03:32:46.383486 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-28 03:32:46.383496 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-28 03:32:46.383505 | orchestrator |
2026-03-28 03:32:46.383515 | orchestrator |
2026-03-28 03:32:46.383525 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:32:46.383535 | orchestrator | Saturday 28 March 2026 03:32:45 +0000 (0:00:00.503) 0:08:43.717 ********
2026-03-28 03:32:46.383545 | orchestrator | ===============================================================================
2026-03-28 03:32:46.383554 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.91s
2026-03-28 03:32:46.383564 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.72s
2026-03-28 03:32:46.383574 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.05s
2026-03-28 03:32:46.383610 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.41s
2026-03-28 03:32:46.383620 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.35s
2026-03-28 03:32:46.383631 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.35s
2026-03-28 03:32:46.383641 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.61s
2026-03-28 03:32:46.383664 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.85s
2026-03-28 03:32:46.383676 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.07s
2026-03-28 03:32:46.383687 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.80s
2026-03-28 03:32:46.383698 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.38s
2026-03-28 03:32:46.383709 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.46s
2026-03-28 03:32:46.383720 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.40s
2026-03-28 03:32:46.383731 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.12s
2026-03-28 03:32:46.383743 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.02s
2026-03-28 03:32:46.383754 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.86s
2026-03-28 03:32:46.383765 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.69s
2026-03-28 03:32:46.383776 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.51s
2026-03-28 03:32:46.383787 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.28s
2026-03-28 03:32:46.383798 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.27s
2026-03-28 03:32:48.884625 | orchestrator | 2026-03-28 03:32:48 | INFO  | Task 4aeeb375-86e8-4b7f-be27-9f4e2ac7114c (horizon) was prepared for execution.
2026-03-28 03:32:48.884755 | orchestrator | 2026-03-28 03:32:48 | INFO  | It takes a moment until task 4aeeb375-86e8-4b7f-be27-9f4e2ac7114c (horizon) has been started and output is visible here.
2026-03-28 03:32:56.530275 | orchestrator |
2026-03-28 03:32:56.530356 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 03:32:56.530363 | orchestrator |
2026-03-28 03:32:56.530368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 03:32:56.530373 | orchestrator | Saturday 28 March 2026 03:32:53 +0000 (0:00:00.301) 0:00:00.301 ********
2026-03-28 03:32:56.530377 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:32:56.530381 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:32:56.530385 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:32:56.530389 | orchestrator |
2026-03-28 03:32:56.530393 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 03:32:56.530397 | orchestrator | Saturday 28 March 2026 03:32:53 +0000 (0:00:00.317) 0:00:00.618 ********
2026-03-28 03:32:56.530401 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-28 03:32:56.530406 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-28 03:32:56.530410 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-28 03:32:56.530413 | orchestrator |
2026-03-28 03:32:56.530418 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-28 03:32:56.530422 | orchestrator |
2026-03-28 03:32:56.530425 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 03:32:56.530429 | orchestrator | Saturday 28 March 2026 03:32:54 +0000 (0:00:00.477) 0:00:01.095 ********
2026-03-28 03:32:56.530434 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:32:56.530438 | orchestrator |
2026-03-28 03:32:56.530442 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-28 03:32:56.530446 | orchestrator | Saturday 28 March 2026 03:32:54 +0000 (0:00:00.540) 0:00:01.636 ********
2026-03-28 03:32:56.530478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 03:32:56.530496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 03:32:56.530509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 03:32:56.530513 | orchestrator |
2026-03-28 03:32:56.530517 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-28 03:32:56.530521 | orchestrator | Saturday 28 March 2026 03:32:55 +0000 (0:00:01.197) 0:00:02.834 ********
2026-03-28 03:32:56.530525 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:32:56.530529 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:32:56.530533 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:32:56.530536 | orchestrator |
2026-03-28 03:32:56.530540 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 03:32:56.530544 | orchestrator | Saturday 28 March 2026 03:32:56 +0000 (0:00:00.525) 0:00:03.359 ********
2026-03-28 03:32:56.530550 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-28 03:33:03.099170 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-28 03:33:03.099270 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-28 03:33:03.099283 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-28 03:33:03.099293 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-28 03:33:03.099302 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-28 03:33:03.099316 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-28 03:33:03.099331 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-28 03:33:03.099372 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-28 03:33:03.099390 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-28 03:33:03.099405 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-28 03:33:03.099420 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-28 03:33:03.099431 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-28 03:33:03.099440 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-28 03:33:03.099448 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-28 03:33:03.099457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-28 03:33:03.099466 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-28 03:33:03.099474 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-28 03:33:03.099483 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-28 03:33:03.099492 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-28 03:33:03.099500 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-28 03:33:03.099509 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-28 03:33:03.099517 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-28 03:33:03.099526 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-28 03:33:03.099536 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-28 03:33:03.099547 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-28 03:33:03.099556 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-28 03:33:03.099564 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-28 03:33:03.099587 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-28 03:33:03.099596 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-28 03:33:03.099605 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-28 03:33:03.099614 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-28 03:33:03.099623 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-28 03:33:03.099633 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-28 03:33:03.099642 | orchestrator |
2026-03-28 03:33:03.099654 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.099665 | orchestrator | Saturday 28 March 2026 03:32:57 +0000 (0:00:00.883) 0:00:04.243 ********
2026-03-28 03:33:03.099676 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.099694 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.099704 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.099714 | orchestrator |
2026-03-28 03:33:03.099725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.099736 | orchestrator | Saturday 28 March 2026 03:32:57 +0000 (0:00:00.347) 0:00:04.591 ********
2026-03-28 03:33:03.099746 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.099758 | orchestrator |
2026-03-28 03:33:03.099784 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.099796 | orchestrator | Saturday 28 March 2026 03:32:57 +0000 (0:00:00.322) 0:00:04.914 ********
2026-03-28 03:33:03.099806 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.099817 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.099827 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.099837 | orchestrator |
2026-03-28 03:33:03.099847 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.099857 | orchestrator | Saturday 28 March 2026 03:32:58 +0000 (0:00:00.325) 0:00:05.239 ********
2026-03-28 03:33:03.099868 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.099878 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.099888 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.099899 | orchestrator |
2026-03-28 03:33:03.099909 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.099920 | orchestrator | Saturday 28 March 2026 03:32:58 +0000 (0:00:00.475) 0:00:05.715 ********
2026-03-28 03:33:03.099929 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.099937 | orchestrator |
2026-03-28 03:33:03.099946 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.099955 | orchestrator | Saturday 28 March 2026 03:32:58 +0000 (0:00:00.168) 0:00:05.883 ********
2026-03-28 03:33:03.099963 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.099973 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.099981 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.099990 | orchestrator |
2026-03-28 03:33:03.099999 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.100008 | orchestrator | Saturday 28 March 2026 03:32:59 +0000 (0:00:00.313) 0:00:06.197 ********
2026-03-28 03:33:03.100017 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.100025 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.100034 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.100042 | orchestrator |
2026-03-28 03:33:03.100051 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.100060 | orchestrator | Saturday 28 March 2026 03:32:59 +0000 (0:00:00.546) 0:00:06.744 ********
2026-03-28 03:33:03.100068 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100077 | orchestrator |
2026-03-28 03:33:03.100086 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.100094 | orchestrator | Saturday 28 March 2026 03:32:59 +0000 (0:00:00.136) 0:00:06.880 ********
2026-03-28 03:33:03.100130 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100139 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.100147 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.100156 | orchestrator |
2026-03-28 03:33:03.100165 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.100173 | orchestrator | Saturday 28 March 2026 03:33:00 +0000 (0:00:00.319) 0:00:07.200 ********
2026-03-28 03:33:03.100182 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.100190 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.100198 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.100207 | orchestrator |
2026-03-28 03:33:03.100218 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.100233 | orchestrator | Saturday 28 March 2026 03:33:00 +0000 (0:00:00.329) 0:00:07.529 ********
2026-03-28 03:33:03.100248 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100263 | orchestrator |
2026-03-28 03:33:03.100286 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.100296 | orchestrator | Saturday 28 March 2026 03:33:00 +0000 (0:00:00.147) 0:00:07.676 ********
2026-03-28 03:33:03.100304 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100313 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.100321 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.100330 | orchestrator |
2026-03-28 03:33:03.100339 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.100348 | orchestrator | Saturday 28 March 2026 03:33:01 +0000 (0:00:00.537) 0:00:08.214 ********
2026-03-28 03:33:03.100363 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.100378 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.100401 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.100416 | orchestrator |
2026-03-28 03:33:03.100430 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.100445 | orchestrator | Saturday 28 March 2026 03:33:01 +0000 (0:00:00.363) 0:00:08.578 ********
2026-03-28 03:33:03.100454 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100463 | orchestrator |
2026-03-28 03:33:03.100471 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.100480 | orchestrator | Saturday 28 March 2026 03:33:01 +0000 (0:00:00.155) 0:00:08.734 ********
2026-03-28 03:33:03.100488 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100497 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.100505 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.100513 | orchestrator |
2026-03-28 03:33:03.100522 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.100531 | orchestrator | Saturday 28 March 2026 03:33:02 +0000 (0:00:00.314) 0:00:09.048 ********
2026-03-28 03:33:03.100539 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:03.100548 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:03.100556 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:03.100565 | orchestrator |
2026-03-28 03:33:03.100573 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:03.100582 | orchestrator | Saturday 28 March 2026 03:33:02 +0000 (0:00:00.332) 0:00:09.381 ********
2026-03-28 03:33:03.100590 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100599 | orchestrator |
2026-03-28 03:33:03.100607 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:03.100615 | orchestrator | Saturday 28 March 2026 03:33:02 +0000 (0:00:00.348) 0:00:09.729 ********
2026-03-28 03:33:03.100624 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:03.100633 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:03.100641 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:03.100649 | orchestrator |
2026-03-28 03:33:03.100658 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:03.100673 | orchestrator | Saturday 28 March 2026 03:33:03 +0000 (0:00:00.331) 0:00:10.061 ********
2026-03-28 03:33:17.670134 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:17.670255 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:17.670272 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:17.670285 | orchestrator |
2026-03-28 03:33:17.670298 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:17.670310 | orchestrator | Saturday 28 March 2026 03:33:03 +0000 (0:00:00.330) 0:00:10.391 ********
2026-03-28 03:33:17.670322 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670334 | orchestrator |
2026-03-28 03:33:17.670345 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:17.670356 | orchestrator | Saturday 28 March 2026 03:33:03 +0000 (0:00:00.141) 0:00:10.533 ********
2026-03-28 03:33:17.670367 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670378 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.670389 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.670400 | orchestrator |
2026-03-28 03:33:17.670411 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:17.670448 | orchestrator | Saturday 28 March 2026 03:33:03 +0000 (0:00:00.319) 0:00:10.852 ********
2026-03-28 03:33:17.670460 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:17.670471 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:17.670482 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:17.670493 | orchestrator |
2026-03-28 03:33:17.670504 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:17.670518 | orchestrator | Saturday 28 March 2026 03:33:04 +0000 (0:00:00.564) 0:00:11.416 ********
2026-03-28 03:33:17.670536 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670554 | orchestrator |
2026-03-28 03:33:17.670573 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:17.670592 | orchestrator | Saturday 28 March 2026 03:33:04 +0000 (0:00:00.139) 0:00:11.555 ********
2026-03-28 03:33:17.670610 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670628 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.670648 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.670667 | orchestrator |
2026-03-28 03:33:17.670686 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:17.670706 | orchestrator | Saturday 28 March 2026 03:33:04 +0000 (0:00:00.310) 0:00:11.866 ********
2026-03-28 03:33:17.670726 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:17.670741 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:17.670753 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:17.670764 | orchestrator |
2026-03-28 03:33:17.670775 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:17.670786 | orchestrator | Saturday 28 March 2026 03:33:05 +0000 (0:00:00.357) 0:00:12.224 ********
2026-03-28 03:33:17.670797 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670808 | orchestrator |
2026-03-28 03:33:17.670818 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:17.670829 | orchestrator | Saturday 28 March 2026 03:33:05 +0000 (0:00:00.144) 0:00:12.368 ********
2026-03-28 03:33:17.670840 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670852 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.670862 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.670873 | orchestrator |
2026-03-28 03:33:17.670884 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-28 03:33:17.670895 | orchestrator | Saturday 28 March 2026 03:33:05 +0000 (0:00:00.520) 0:00:12.888 ********
2026-03-28 03:33:17.670906 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:33:17.670916 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:33:17.670927 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:33:17.670938 | orchestrator |
2026-03-28 03:33:17.670949 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-28 03:33:17.670960 | orchestrator | Saturday 28 March 2026 03:33:06 +0000 (0:00:00.345) 0:00:13.234 ********
2026-03-28 03:33:17.670971 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.670981 | orchestrator |
2026-03-28 03:33:17.670992 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-28 03:33:17.671003 | orchestrator | Saturday 28 March 2026 03:33:06 +0000 (0:00:00.153) 0:00:13.387 ********
2026-03-28 03:33:17.671014 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.671039 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.671051 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.671062 | orchestrator |
2026-03-28 03:33:17.671073 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-28 03:33:17.671084 | orchestrator | Saturday 28 March 2026 03:33:06 +0000 (0:00:00.313) 0:00:13.701 ********
2026-03-28 03:33:17.671095 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:33:17.671162 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:33:17.671181 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:33:17.671201 | orchestrator |
2026-03-28 03:33:17.671220 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-28 03:33:17.671250 | orchestrator | Saturday 28 March 2026 03:33:08 +0000 (0:00:01.944) 0:00:15.646 ********
2026-03-28 03:33:17.671262 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-28 03:33:17.671273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-28 03:33:17.671284 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-28 03:33:17.671295 | orchestrator |
2026-03-28 03:33:17.671306 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-28 03:33:17.671316 | orchestrator | Saturday 28 March 2026 03:33:10 +0000 (0:00:01.941) 0:00:17.588 ********
2026-03-28 03:33:17.671327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-28 03:33:17.671338 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-28 03:33:17.671349 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-28 03:33:17.671360 | orchestrator |
2026-03-28 03:33:17.671371 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-28 03:33:17.671402 | orchestrator | Saturday 28 March 2026 03:33:12 +0000 (0:00:01.866) 0:00:19.454 ********
2026-03-28 03:33:17.671414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-28 03:33:17.671424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-28 03:33:17.671435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-28 03:33:17.671446 | orchestrator |
2026-03-28 03:33:17.671457 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-28 03:33:17.671467 | orchestrator | Saturday 28 March 2026 03:33:14 +0000 (0:00:01.608) 0:00:21.062 ********
2026-03-28 03:33:17.671478 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.671489 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.671500 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.671510 | orchestrator |
2026-03-28 03:33:17.671521 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-28 03:33:17.671532 | orchestrator | Saturday 28 March 2026 03:33:14 +0000 (0:00:00.537) 0:00:21.600 ********
2026-03-28 03:33:17.671543 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:33:17.671553 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:33:17.671564 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:33:17.671575 | orchestrator |
2026-03-28 03:33:17.671586 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 03:33:17.671597 | orchestrator | Saturday 28 March 2026 03:33:14 +0000 (0:00:00.348) 0:00:21.949 ********
2026-03-28 03:33:17.671607 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:33:17.671618 | orchestrator |
2026-03-28 03:33:17.671630 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-28 03:33:17.671640 | orchestrator |
Saturday 28 March 2026 03:33:15 +0000 (0:00:00.650) 0:00:22.599 ******** 2026-03-28 03:33:17.671665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:33:17.671711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:33:18.320248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:33:18.320356 | orchestrator | 2026-03-28 03:33:18.320368 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-28 03:33:18.320378 | orchestrator | Saturday 28 March 2026 03:33:17 +0000 (0:00:02.024) 0:00:24.623 ******** 2026-03-28 03:33:18.320402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:18.320419 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:33:18.320434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:18.320442 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:33:18.320456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:20.842492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:33:20.842598 | orchestrator | 2026-03-28 03:33:20.842624 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-03-28 03:33:20.842646 | orchestrator | Saturday 28 March 2026 03:33:18 +0000 (0:00:00.656) 0:00:25.280 ******** 2026-03-28 03:33:20.842702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:20.842722 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:33:20.842756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:20.842792 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:33:20.842806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 03:33:20.842817 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:33:20.842828 | orchestrator | 2026-03-28 03:33:20.842839 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-28 03:33:20.842890 | orchestrator | Saturday 28 March 2026 03:33:19 +0000 (0:00:00.867) 0:00:26.148 ******** 2026-03-28 03:33:20.842920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:34:10.736786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:34:10.736932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 03:34:10.736947 | orchestrator | 
2026-03-28 03:34:10.736957 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 03:34:10.736966 | orchestrator | Saturday 28 March 2026 03:33:20 +0000 (0:00:01.653) 0:00:27.802 ********
2026-03-28 03:34:10.736974 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:34:10.736982 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:34:10.736989 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:34:10.736997 | orchestrator |
2026-03-28 03:34:10.737004 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-28 03:34:10.737011 | orchestrator | Saturday 28 March 2026 03:33:21 +0000 (0:00:00.327) 0:00:28.129 ********
2026-03-28 03:34:10.737019 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:34:10.737027 | orchestrator |
2026-03-28 03:34:10.737034 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-28 03:34:10.737042 | orchestrator | Saturday 28 March 2026 03:33:21 +0000 (0:00:00.552) 0:00:28.681 ********
2026-03-28 03:34:10.737049 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:34:10.737056 | orchestrator |
2026-03-28 03:34:10.737063 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-28 03:34:10.737070 | orchestrator | Saturday 28 March 2026 03:33:23 +0000 (0:00:02.201) 0:00:30.882 ********
2026-03-28 03:34:10.737077 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:34:10.737085 | orchestrator |
2026-03-28 03:34:10.737092 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-28 03:34:10.737099 | orchestrator | Saturday 28 March 2026 03:33:26 +0000 (0:00:02.688) 0:00:33.571 ********
2026-03-28 03:34:10.737136 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:34:10.737152 | orchestrator |
2026-03-28 03:34:10.737167 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-28 03:34:10.737174 | orchestrator | Saturday 28 March 2026 03:33:43 +0000 (0:00:16.771) 0:00:50.342 ********
2026-03-28 03:34:10.737181 | orchestrator |
2026-03-28 03:34:10.737188 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-28 03:34:10.737196 | orchestrator | Saturday 28 March 2026 03:33:43 +0000 (0:00:00.072) 0:00:50.415 ********
2026-03-28 03:34:10.737203 | orchestrator |
2026-03-28 03:34:10.737210 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-28 03:34:10.737217 | orchestrator | Saturday 28 March 2026 03:33:43 +0000 (0:00:00.073) 0:00:50.488 ********
2026-03-28 03:34:10.737224 | orchestrator |
2026-03-28 03:34:10.737232 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-28 03:34:10.737239 | orchestrator | Saturday 28 March 2026 03:33:43 +0000 (0:00:00.080) 0:00:50.569 ********
2026-03-28 03:34:10.737247 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:34:10.737254 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:34:10.737261 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:34:10.737268 | orchestrator |
2026-03-28 03:34:10.737275 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:34:10.737284 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-28 03:34:10.737293 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-28 03:34:10.737300 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-28 03:34:10.737307 | orchestrator |
2026-03-28 03:34:10.737315 | orchestrator |
2026-03-28 03:34:10.737322 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:34:10.737331 | orchestrator | Saturday 28 March 2026 03:34:10 +0000 (0:00:27.109) 0:01:17.678 ********
2026-03-28 03:34:10.737340 | orchestrator | ===============================================================================
2026-03-28 03:34:10.737349 | orchestrator | horizon : Restart horizon container ------------------------------------ 27.11s
2026-03-28 03:34:10.737357 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.77s
2026-03-28 03:34:10.737365 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.69s
2026-03-28 03:34:10.737373 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.20s
2026-03-28 03:34:10.737381 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.02s
2026-03-28 03:34:10.737394 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.94s
2026-03-28 03:34:10.737403 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.94s
2026-03-28 03:34:10.737411 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.87s
2026-03-28 03:34:10.737420 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s
2026-03-28 03:34:10.737428 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.61s
2026-03-28 03:34:10.737437 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s
2026-03-28 03:34:10.737449 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.88s
2026-03-28 03:34:10.737461 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.87s
2026-03-28 03:34:10.737481 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s
2026-03-28 03:34:11.189695 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-03-28 03:34:11.189780 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2026-03-28 03:34:11.189794 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2026-03-28 03:34:11.189831 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s
2026-03-28 03:34:11.189842 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2026-03-28 03:34:11.189853 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s
2026-03-28 03:34:13.688997 | orchestrator | 2026-03-28 03:34:13 | INFO  | Task 14b587e1-fee4-43e5-832f-970e83c30e81 (skyline) was prepared for execution.
2026-03-28 03:34:13.689068 | orchestrator | 2026-03-28 03:34:13 | INFO  | It takes a moment until task 14b587e1-fee4-43e5-832f-970e83c30e81 (skyline) has been started and output is visible here.
2026-03-28 03:34:45.338660 | orchestrator |
2026-03-28 03:34:45.338787 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 03:34:45.338812 | orchestrator |
2026-03-28 03:34:45.338828 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 03:34:45.338844 | orchestrator | Saturday 28 March 2026 03:34:18 +0000 (0:00:00.266) 0:00:00.266 ********
2026-03-28 03:34:45.338860 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:34:45.338876 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:34:45.338890 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:34:45.338905 | orchestrator |
2026-03-28 03:34:45.338921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 03:34:45.338936 | orchestrator | Saturday 28 March 2026 03:34:18 +0000 (0:00:00.336) 0:00:00.603 ********
2026-03-28 03:34:45.338952 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-03-28 03:34:45.338968 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-03-28 03:34:45.338983 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-03-28 03:34:45.338999 | orchestrator |
2026-03-28 03:34:45.339014 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-03-28 03:34:45.339029 | orchestrator |
2026-03-28 03:34:45.339045 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-28 03:34:45.339058 | orchestrator | Saturday 28 March 2026 03:34:18 +0000 (0:00:00.491) 0:00:01.094 ********
2026-03-28 03:34:45.339068 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:34:45.339078 | orchestrator |
2026-03-28 03:34:45.339087 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-03-28 03:34:45.339096 | orchestrator | Saturday 28 March 2026 03:34:19 +0000 (0:00:00.610) 0:00:01.705 ********
2026-03-28 03:34:45.339154 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-03-28 03:34:45.339166 | orchestrator |
2026-03-28 03:34:45.339175 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-03-28 03:34:45.339184 | orchestrator | Saturday 28 March 2026 03:34:22 +0000 (0:00:03.428) 0:00:05.134 ********
2026-03-28 03:34:45.339193 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-03-28 03:34:45.339205 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-03-28 03:34:45.339220 | orchestrator |
2026-03-28 03:34:45.339236 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-03-28 03:34:45.339251 | orchestrator | Saturday 28 March 2026 03:34:29 +0000 (0:00:06.757) 0:00:11.891 ********
2026-03-28 03:34:45.339267 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 03:34:45.339285 | orchestrator |
2026-03-28 03:34:45.339301 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-03-28 03:34:45.339318 | orchestrator | Saturday 28 March 2026 03:34:32 +0000 (0:00:03.216) 0:00:15.107 ********
2026-03-28 03:34:45.339331 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 03:34:45.339342 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-03-28 03:34:45.339352 | orchestrator |
2026-03-28 03:34:45.339364 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-03-28 03:34:45.339401 | orchestrator | Saturday 28 March 2026 03:34:37 +0000 (0:00:04.080) 0:00:19.188 ********
2026-03-28 03:34:45.339412 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 03:34:45.339423 | orchestrator |
2026-03-28 03:34:45.339434 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] *********************
2026-03-28 03:34:45.339444 | orchestrator | Saturday 28 March 2026 03:34:40 +0000 (0:00:03.177) 0:00:22.366 ********
2026-03-28 03:34:45.339455 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin)
2026-03-28 03:34:45.339465 | orchestrator |
2026-03-28 03:34:45.339491 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-03-28 03:34:45.339501 | orchestrator | Saturday 28 March 2026 03:34:43 +0000 (0:00:03.781) 0:00:26.148 ********
2026-03-28 03:34:45.339516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:45.339549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:45.339561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:45.339573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:45.339596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:45.339614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.122777 | orchestrator |
2026-03-28 03:34:49.122903 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-28 03:34:49.122922 | orchestrator | Saturday 28 March 2026 03:34:45 +0000 (0:00:01.324) 0:00:27.472 ********
2026-03-28 03:34:49.122935 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:34:49.122946 | orchestrator |
2026-03-28 03:34:49.122958 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ********
2026-03-28 03:34:49.122969 | orchestrator | Saturday 28 March 2026 03:34:46 +0000 (0:00:00.745) 0:00:28.218 ********
2026-03-28 03:34:49.122983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123264 | orchestrator |
2026-03-28 03:34:49.123276 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] ***
2026-03-28 03:34:49.123290 | orchestrator | Saturday 28 March 2026 03:34:48 +0000 (0:00:02.415) 0:00:30.633 ********
2026-03-28 03:34:49.123310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:49.123339 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:34:49.123363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460227 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:34:50.460253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460270 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:34:50.460277 | orchestrator |
2026-03-28 03:34:50.460285 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] *****
2026-03-28 03:34:50.460293 | orchestrator | Saturday 28 March 2026 03:34:49 +0000 (0:00:00.627) 0:00:31.260 ********
2026-03-28 03:34:50.460300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460334 | orchestrator | skipping: [testbed-node-0]
2026-03-28 03:34:50.460346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:50.460368 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:34:50.460386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:59.425151 | orchestrator | skipping: [testbed-node-1]
2026-03-28 03:34:59.425253 | orchestrator |
2026-03-28 03:34:59.425266 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ******************
2026-03-28 03:34:59.425278 | orchestrator | Saturday 28 March 2026 03:34:50 +0000 (0:00:01.329) 0:00:32.590 ********
2026-03-28 03:34:59.425304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-28 03:34:59.425317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external':
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:34:59.425328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:34:59.425357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:34:59.425384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:34:59.425399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-28 03:34:59.425409 | orchestrator |
2026-03-28 03:34:59.425418 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-03-28 03:34:59.425427 | orchestrator | Saturday 28 March 2026 03:34:53 +0000 (0:00:02.567) 0:00:35.157 ********
2026-03-28 03:34:59.425436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-28 03:34:59.425445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-28 03:34:59.425454 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-03-28 03:34:59.425463 | orchestrator |
2026-03-28 03:34:59.425471 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-03-28 03:34:59.425480 | orchestrator | Saturday 28 March 2026 03:34:54 +0000 (0:00:01.675) 0:00:36.833 ********
2026-03-28 03:34:59.425489 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-28 03:34:59.425497 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-28 03:34:59.425515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-03-28 03:34:59.425523 | orchestrator |
2026-03-28 03:34:59.425532 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-03-28 03:34:59.425541 | orchestrator | Saturday 28 March 2026 03:34:56 +0000 (0:00:02.199) 0:00:39.033 ********
2026-03-28 03:34:59.425550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:34:59.425567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596535 | orchestrator | 2026-03-28 03:35:01.596547 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-28 03:35:01.596557 | orchestrator | Saturday 28 March 2026 03:34:59 +0000 (0:00:02.532) 0:00:41.565 ******** 2026-03-28 03:35:01.596566 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:35:01.596576 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 03:35:01.596585 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:35:01.596594 | orchestrator | 2026-03-28 03:35:01.596619 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-28 03:35:01.596629 | orchestrator | Saturday 28 March 2026 03:34:59 +0000 (0:00:00.340) 0:00:41.905 ******** 2026-03-28 03:35:01.596652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:01.596714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-28 03:35:41.169913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-03-28 03:35:41.170188 | orchestrator |
2026-03-28 03:35:41.170216 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-03-28 03:35:41.170233 | orchestrator | Saturday 28 March 2026 03:35:01 +0000 (0:00:01.821) 0:00:43.726 ********
2026-03-28 03:35:41.170247 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:35:41.170263 | orchestrator |
2026-03-28 03:35:41.170277 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-03-28 03:35:41.170291 | orchestrator | Saturday 28 March 2026 03:35:03 +0000 (0:00:02.097) 0:00:45.824 ********
2026-03-28 03:35:41.170305 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:35:41.170320 | orchestrator |
2026-03-28 03:35:41.170334 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-03-28 03:35:41.170348 | orchestrator | Saturday 28 March 2026 03:35:05 +0000 (0:00:02.175) 0:00:47.999 ********
2026-03-28 03:35:41.170362 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:35:41.170375 | orchestrator |
2026-03-28 03:35:41.170388 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-28 03:35:41.170402 | orchestrator | Saturday 28 March 2026 03:35:13 +0000 (0:00:07.670) 0:00:55.670 ********
2026-03-28 03:35:41.170415 | orchestrator |
2026-03-28 03:35:41.170432 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-28 03:35:41.170448 | orchestrator | Saturday 28 March 2026 03:35:13 +0000 (0:00:00.069) 0:00:55.739 ********
2026-03-28 03:35:41.170464 | orchestrator |
2026-03-28 03:35:41.170482 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-03-28 03:35:41.170498 | orchestrator | Saturday 28 March 2026 03:35:13 +0000 (0:00:00.070) 0:00:55.810 ********
2026-03-28 03:35:41.170512 | orchestrator |
2026-03-28 03:35:41.170528 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-03-28 03:35:41.170544 | orchestrator | Saturday 28 March 2026 03:35:13 +0000 (0:00:00.075) 0:00:55.886 ********
2026-03-28 03:35:41.170560 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:35:41.170576 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:35:41.170592 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:35:41.170607 | orchestrator |
2026-03-28 03:35:41.170623 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-03-28 03:35:41.170639 | orchestrator | Saturday 28 March 2026 03:35:25 +0000 (0:00:11.832) 0:01:07.718 ********
2026-03-28 03:35:41.170655 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:35:41.170667 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:35:41.170682 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:35:41.170696 | orchestrator |
2026-03-28 03:35:41.170709 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:35:41.170724 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 03:35:41.170739 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 03:35:41.170753 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 03:35:41.170767 | orchestrator |
2026-03-28 03:35:41.170778 | orchestrator |
2026-03-28 03:35:41.170790 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:35:41.170801 | orchestrator | Saturday 28 March 2026 03:35:40 +0000 (0:00:15.239) 0:01:22.957 ********
2026-03-28 03:35:41.170812 | orchestrator | ===============================================================================
2026-03-28 03:35:41.170834 | orchestrator | skyline : Restart skyline-console container ---------------------------- 15.24s
2026-03-28 03:35:41.170845 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.83s
2026-03-28 03:35:41.170856 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.67s
2026-03-28 03:35:41.170866 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.76s
2026-03-28 03:35:41.170892 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.08s
2026-03-28 03:35:41.170904 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.78s
2026-03-28 03:35:41.170915 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.43s
2026-03-28 03:35:41.170926 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.22s
2026-03-28 03:35:41.170958 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.18s
2026-03-28 03:35:41.170969 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.57s
2026-03-28 03:35:41.170979 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.53s
2026-03-28 03:35:41.170989 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.42s
2026-03-28 03:35:41.171001 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.20s
2026-03-28 03:35:41.171012 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.18s
2026-03-28 03:35:41.171024 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.10s
2026-03-28 03:35:41.171036 | orchestrator | skyline : Check skyline container --------------------------------------- 1.82s
2026-03-28 03:35:41.171048 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.68s
2026-03-28 03:35:41.171059 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.33s
2026-03-28 03:35:41.171068 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.32s
2026-03-28 03:35:41.171080 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.75s
2026-03-28 03:35:43.598975 | orchestrator | 2026-03-28 03:35:43 | INFO  | Task 7e32be1e-8b46-4818-ab5c-f18665413bd4 (glance) was prepared for execution.
2026-03-28 03:35:43.599081 | orchestrator | 2026-03-28 03:35:43 | INFO  | It takes a moment until task 7e32be1e-8b46-4818-ab5c-f18665413bd4 (glance) has been started and output is visible here.
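For post-processing console output like the skyline play above, the PLAY RECAP counters (ok/changed/failed per host) are the quickest health signal. A minimal parsing sketch follows; the regex and function name are illustrative assumptions, not part of the job or of Ansible itself:

```python
import re

# Matches Ansible "PLAY RECAP" host lines, e.g.
# "testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap(lines):
    """Return {host: {counter: int}} for every recap line found in the log."""
    recap = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            fields = m.groupdict()
            host = fields.pop("host")
            recap[host] = {k: int(v) for k, v in fields.items()}
    return recap

# Values taken from the PLAY RECAP of the skyline play above.
log = [
    "testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0",
    "testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0",
]
recap = parse_recap(log)
assert recap["testbed-node-0"]["changed"] == 16
assert all(counters["failed"] == 0 for counters in recap.values())
```

A script like this can gate a CI step on `failed == 0` and `unreachable == 0` across all hosts instead of eyeballing the recap.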
2026-03-28 03:36:17.878713 | orchestrator |
2026-03-28 03:36:17.878850 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 03:36:17.878879 | orchestrator |
2026-03-28 03:36:17.878899 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 03:36:17.878918 | orchestrator | Saturday 28 March 2026 03:35:47 +0000 (0:00:00.268) 0:00:00.268 ********
2026-03-28 03:36:17.878937 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:36:17.878956 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:36:17.878974 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:36:17.878993 | orchestrator |
2026-03-28 03:36:17.879013 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 03:36:17.879031 | orchestrator | Saturday 28 March 2026 03:35:48 +0000 (0:00:00.355) 0:00:00.623 ********
2026-03-28 03:36:17.879050 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-28 03:36:17.879071 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-28 03:36:17.879083 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-28 03:36:17.879093 | orchestrator |
2026-03-28 03:36:17.879104 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-28 03:36:17.879175 | orchestrator |
2026-03-28 03:36:17.879187 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 03:36:17.879198 | orchestrator | Saturday 28 March 2026 03:35:48 +0000 (0:00:00.459) 0:00:01.083 ********
2026-03-28 03:36:17.879235 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:36:17.879250 | orchestrator |
2026-03-28 03:36:17.879263 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-28 03:36:17.879276 | orchestrator | Saturday 28 March 2026 03:35:49 +0000 (0:00:00.653) 0:00:01.736 ********
2026-03-28 03:36:17.879288 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-28 03:36:17.879299 | orchestrator |
2026-03-28 03:36:17.879312 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-28 03:36:17.879325 | orchestrator | Saturday 28 March 2026 03:35:52 +0000 (0:00:03.398) 0:00:05.135 ********
2026-03-28 03:36:17.879338 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-28 03:36:17.879350 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-28 03:36:17.879362 | orchestrator |
2026-03-28 03:36:17.879375 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-28 03:36:17.879387 | orchestrator | Saturday 28 March 2026 03:35:59 +0000 (0:00:06.515) 0:00:11.650 ********
2026-03-28 03:36:17.879400 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 03:36:17.879414 | orchestrator |
2026-03-28 03:36:17.879426 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-28 03:36:17.879439 | orchestrator | Saturday 28 March 2026 03:36:02 +0000 (0:00:03.214) 0:00:14.865 ********
2026-03-28 03:36:17.879451 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 03:36:17.879463 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-28 03:36:17.879476 | orchestrator |
2026-03-28 03:36:17.879488 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-28 03:36:17.879501 | orchestrator | Saturday 28 March 2026 03:36:06 +0000 (0:00:04.106) 0:00:18.972 ********
2026-03-28 03:36:17.879513 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28
03:36:17.879526 | orchestrator | 2026-03-28 03:36:17.879539 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-28 03:36:17.879551 | orchestrator | Saturday 28 March 2026 03:36:09 +0000 (0:00:03.316) 0:00:22.289 ******** 2026-03-28 03:36:17.879580 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-28 03:36:17.879593 | orchestrator | 2026-03-28 03:36:17.879604 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-28 03:36:17.879615 | orchestrator | Saturday 28 March 2026 03:36:13 +0000 (0:00:03.732) 0:00:26.021 ******** 2026-03-28 03:36:17.879661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:17.879701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:17.879732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:17.879753 | orchestrator | 2026-03-28 03:36:17.879771 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-28 03:36:17.879790 | orchestrator | Saturday 28 March 2026 03:36:17 +0000 (0:00:03.481) 0:00:29.503 ******** 2026-03-28 03:36:17.879811 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:36:17.879843 | orchestrator | 2026-03-28 03:36:17.879870 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-28 03:36:33.866484 | orchestrator | Saturday 28 March 2026 03:36:17 +0000 (0:00:00.713) 0:00:30.216 ******** 2026-03-28 03:36:33.866598 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:36:33.866614 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:36:33.866624 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:36:33.866635 | orchestrator | 2026-03-28 03:36:33.866646 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-28 03:36:33.866657 | orchestrator | Saturday 28 March 2026 03:36:21 +0000 (0:00:03.710) 0:00:33.927 ******** 2026-03-28 03:36:33.866667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:36:33.866678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:36:33.866688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:36:33.866698 | orchestrator | 2026-03-28 03:36:33.866708 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-28 03:36:33.866717 | orchestrator | Saturday 28 March 2026 03:36:23 +0000 (0:00:01.568) 0:00:35.495 ******** 2026-03-28 03:36:33.866727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 
03:36:33.866737 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:36:33.866747 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:36:33.866756 | orchestrator | 2026-03-28 03:36:33.866766 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-28 03:36:33.866776 | orchestrator | Saturday 28 March 2026 03:36:24 +0000 (0:00:01.488) 0:00:36.983 ******** 2026-03-28 03:36:33.866785 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:36:33.866796 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:36:33.866806 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:36:33.866815 | orchestrator | 2026-03-28 03:36:33.866825 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-28 03:36:33.866835 | orchestrator | Saturday 28 March 2026 03:36:25 +0000 (0:00:00.772) 0:00:37.756 ******** 2026-03-28 03:36:33.866844 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:36:33.866854 | orchestrator | 2026-03-28 03:36:33.866864 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-28 03:36:33.866874 | orchestrator | Saturday 28 March 2026 03:36:25 +0000 (0:00:00.153) 0:00:37.910 ******** 2026-03-28 03:36:33.866883 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:36:33.866893 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:36:33.866903 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:36:33.866912 | orchestrator | 2026-03-28 03:36:33.866922 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 03:36:33.866932 | orchestrator | Saturday 28 March 2026 03:36:25 +0000 (0:00:00.376) 0:00:38.287 ******** 2026-03-28 03:36:33.866942 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:36:33.866952 | orchestrator | 2026-03-28 03:36:33.866962 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-28 03:36:33.866971 | orchestrator | Saturday 28 March 2026 03:36:26 +0000 (0:00:00.819) 0:00:39.106 ******** 2026-03-28 03:36:33.867004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:33.867059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:33.867080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:36:33.867100 | orchestrator | 2026-03-28 03:36:33.867203 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-28 03:36:33.867216 | orchestrator | Saturday 28 March 2026 03:36:30 +0000 (0:00:03.974) 0:00:43.081 ******** 2026-03-28 03:36:33.867240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:36:37.602228 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:36:37.602311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:36:37.602348 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:36:37.602353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:36:37.602358 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:36:37.602362 | orchestrator | 2026-03-28 03:36:37.602367 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-28 03:36:37.602373 | orchestrator | Saturday 28 March 2026 03:36:33 +0000 (0:00:03.123) 0:00:46.205 ******** 2026-03-28 03:36:37.602388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:36:37.602398 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:36:37.602406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:36:37.602410 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:36:37.602419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 03:37:13.397208 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397330 | orchestrator | 2026-03-28 03:37:13.397347 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-28 03:37:13.397362 | orchestrator | Saturday 28 March 2026 03:36:37 +0000 (0:00:03.737) 0:00:49.942 ******** 2026-03-28 03:37:13.397373 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397406 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397418 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397428 | orchestrator | 2026-03-28 03:37:13.397439 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-28 03:37:13.397451 | orchestrator | Saturday 28 March 2026 03:36:40 +0000 (0:00:03.405) 0:00:53.348 ******** 2026-03-28 03:37:13.397481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:37:13.397498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:37:13.397538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:37:13.397560 | orchestrator | 2026-03-28 03:37:13.397572 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-28 03:37:13.397583 | orchestrator | Saturday 28 March 2026 03:36:44 +0000 (0:00:03.984) 0:00:57.332 ******** 2026-03-28 03:37:13.397594 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:37:13.397605 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:37:13.397616 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:37:13.397627 | orchestrator | 2026-03-28 03:37:13.397638 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-28 03:37:13.397650 | orchestrator | Saturday 28 March 2026 03:36:50 +0000 (0:00:05.827) 0:01:03.160 ******** 2026-03-28 03:37:13.397661 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397672 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397683 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397694 | orchestrator | 2026-03-28 03:37:13.397705 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-28 03:37:13.397718 | orchestrator | Saturday 28 March 2026 03:36:54 +0000 (0:00:03.454) 0:01:06.615 ******** 2026-03-28 03:37:13.397730 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397743 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397755 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397768 | orchestrator | 2026-03-28 03:37:13.397779 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-28 03:37:13.397791 | orchestrator | Saturday 28 March 2026 03:36:57 +0000 (0:00:03.476) 0:01:10.091 ******** 2026-03-28 03:37:13.397802 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397813 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397823 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397834 | orchestrator | 2026-03-28 03:37:13.397845 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-28 03:37:13.397856 | orchestrator | Saturday 28 March 2026 03:37:01 +0000 (0:00:03.463) 0:01:13.555 ******** 2026-03-28 03:37:13.397867 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397878 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397889 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397900 | orchestrator | 2026-03-28 03:37:13.397911 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-28 03:37:13.397922 | orchestrator | Saturday 28 March 2026 03:37:05 +0000 (0:00:03.992) 0:01:17.548 ******** 2026-03-28 03:37:13.397932 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.397943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.397961 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.397977 | orchestrator | 2026-03-28 03:37:13.397995 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-28 03:37:13.398014 | orchestrator | Saturday 28 March 2026 03:37:05 +0000 (0:00:00.573) 0:01:18.122 ******** 2026-03-28 03:37:13.398138 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 03:37:13.398159 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:37:13.398177 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 03:37:13.398197 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:37:13.398216 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 03:37:13.398234 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:37:13.398249 | orchestrator | 2026-03-28 03:37:13.398260 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-28 03:37:13.398271 | orchestrator | Saturday 28 March 2026 03:37:09 +0000 (0:00:03.284) 0:01:21.406 ******** 2026-03-28 03:37:13.398282 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:37:13.398292 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:37:13.398303 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:37:13.398314 | orchestrator | 2026-03-28 03:37:13.398325 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-28 03:37:13.398346 | orchestrator | Saturday 28 March 2026 03:37:13 +0000 (0:00:04.327) 0:01:25.733 ******** 2026-03-28 03:38:28.070332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:38:28.070413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:38:28.070449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 03:38:28.070455 | orchestrator | 2026-03-28 03:38:28.070460 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 03:38:28.070465 | orchestrator | Saturday 28 March 2026 03:37:17 +0000 (0:00:03.875) 0:01:29.608 ******** 2026-03-28 03:38:28.070469 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:38:28.070474 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:38:28.070478 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:38:28.070482 | orchestrator | 2026-03-28 03:38:28.070486 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-28 03:38:28.070490 | orchestrator | Saturday 28 March 2026 03:37:17 +0000 (0:00:00.521) 0:01:30.130 ******** 2026-03-28 03:38:28.070494 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070498 | orchestrator | 2026-03-28 03:38:28.070502 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-28 03:38:28.070506 | orchestrator | Saturday 28 March 2026 03:37:19 +0000 (0:00:02.077) 0:01:32.207 ******** 2026-03-28 03:38:28.070509 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070514 | orchestrator | 2026-03-28 03:38:28.070518 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-28 03:38:28.070521 | orchestrator | Saturday 28 March 2026 03:37:22 +0000 (0:00:02.302) 0:01:34.510 ******** 2026-03-28 03:38:28.070526 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070533 | orchestrator | 2026-03-28 03:38:28.070537 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-28 03:38:28.070541 | orchestrator | Saturday 28 March 2026 03:37:24 +0000 (0:00:01.961) 0:01:36.471 ******** 2026-03-28 03:38:28.070545 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070551 | orchestrator | 2026-03-28 03:38:28.070557 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-28 03:38:28.070563 | orchestrator | Saturday 28 March 2026 03:37:52 +0000 (0:00:28.531) 0:02:05.003 ******** 2026-03-28 03:38:28.070570 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070576 | orchestrator | 2026-03-28 03:38:28.070582 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-28 03:38:28.070588 | orchestrator | Saturday 28 March 2026 03:37:54 +0000 (0:00:02.007) 0:02:07.010 ******** 2026-03-28 03:38:28.070593 | orchestrator | 2026-03-28 03:38:28.070599 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-28 03:38:28.070604 | orchestrator | Saturday 28 March 2026 03:37:54 +0000 (0:00:00.073) 0:02:07.084 ******** 2026-03-28 03:38:28.070610 | orchestrator | 2026-03-28 03:38:28.070617 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-28 03:38:28.070622 | orchestrator | Saturday 28 March 2026 03:37:54 +0000 (0:00:00.068) 0:02:07.153 ******** 2026-03-28 03:38:28.070628 | orchestrator | 2026-03-28 03:38:28.070634 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-28 03:38:28.070641 | orchestrator | Saturday 28 March 2026 03:37:54 +0000 (0:00:00.072) 0:02:07.226 ******** 2026-03-28 03:38:28.070647 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:38:28.070652 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:38:28.070658 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:38:28.070664 | orchestrator | 2026-03-28 03:38:28.070670 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:38:28.070677 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 03:38:28.070685 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 03:38:28.070691 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 03:38:28.070698 | orchestrator | 2026-03-28 03:38:28.070704 | orchestrator | 2026-03-28 03:38:28.070710 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:38:28.070716 | orchestrator | Saturday 28 March 2026 03:38:28 +0000 (0:00:33.165) 0:02:40.391 ******** 2026-03-28 03:38:28.070723 | orchestrator | =============================================================================== 2026-03-28 03:38:28.070729 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.17s 2026-03-28 03:38:28.070735 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.53s 2026-03-28 03:38:28.070740 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.52s 2026-03-28 03:38:28.070752 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.83s 2026-03-28 03:38:28.450284 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.33s 2026-03-28 03:38:28.450376 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.11s 2026-03-28 03:38:28.450387 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.99s 2026-03-28 03:38:28.450398 | orchestrator | glance : Copying over config.json files for services -------------------- 3.98s 2026-03-28 03:38:28.450407 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.97s 2026-03-28 03:38:28.450416 | orchestrator | glance : Check glance containers ---------------------------------------- 3.88s 2026-03-28 03:38:28.450444 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.74s 2026-03-28 03:38:28.450474 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.73s 2026-03-28 03:38:28.450483 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.71s 2026-03-28 03:38:28.450492 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.48s 2026-03-28 03:38:28.450501 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.48s 2026-03-28 03:38:28.450510 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.46s 2026-03-28 03:38:28.450519 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.45s 2026-03-28 03:38:28.450528 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.41s 2026-03-28 03:38:28.450537 | orchestrator | 
service-ks-register : glance | Creating services ------------------------ 3.40s 2026-03-28 03:38:28.450546 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.32s 2026-03-28 03:38:30.866871 | orchestrator | 2026-03-28 03:38:30 | INFO  | Task 08343b15-0903-443a-993e-77cfa94a91ae (cinder) was prepared for execution. 2026-03-28 03:38:30.866968 | orchestrator | 2026-03-28 03:38:30 | INFO  | It takes a moment until task 08343b15-0903-443a-993e-77cfa94a91ae (cinder) has been started and output is visible here. 2026-03-28 03:39:06.481062 | orchestrator | 2026-03-28 03:39:06.481215 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:39:06.481236 | orchestrator | 2026-03-28 03:39:06.481248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:39:06.481260 | orchestrator | Saturday 28 March 2026 03:38:35 +0000 (0:00:00.261) 0:00:00.261 ******** 2026-03-28 03:39:06.481272 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:39:06.481286 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:39:06.481299 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:39:06.481311 | orchestrator | 2026-03-28 03:39:06.481323 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:39:06.481335 | orchestrator | Saturday 28 March 2026 03:38:35 +0000 (0:00:00.345) 0:00:00.607 ******** 2026-03-28 03:39:06.481347 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-28 03:39:06.481358 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-28 03:39:06.481371 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-28 03:39:06.481383 | orchestrator | 2026-03-28 03:39:06.481395 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-28 03:39:06.481407 | orchestrator | 2026-03-28 
03:39:06.481419 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 03:39:06.481431 | orchestrator | Saturday 28 March 2026 03:38:36 +0000 (0:00:00.462) 0:00:01.069 ******** 2026-03-28 03:39:06.481442 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:39:06.481455 | orchestrator | 2026-03-28 03:39:06.481467 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-28 03:39:06.481480 | orchestrator | Saturday 28 March 2026 03:38:36 +0000 (0:00:00.594) 0:00:01.664 ******** 2026-03-28 03:39:06.481492 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-28 03:39:06.481504 | orchestrator | 2026-03-28 03:39:06.481517 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-28 03:39:06.481530 | orchestrator | Saturday 28 March 2026 03:38:40 +0000 (0:00:03.596) 0:00:05.260 ******** 2026-03-28 03:39:06.481543 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-28 03:39:06.481556 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-28 03:39:06.481568 | orchestrator | 2026-03-28 03:39:06.481580 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-28 03:39:06.481618 | orchestrator | Saturday 28 March 2026 03:38:46 +0000 (0:00:06.411) 0:00:11.672 ******** 2026-03-28 03:39:06.481632 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:39:06.481645 | orchestrator | 2026-03-28 03:39:06.481659 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-28 03:39:06.481672 | orchestrator | Saturday 28 March 2026 03:38:49 +0000 (0:00:03.135) 
0:00:14.808 ******** 2026-03-28 03:39:06.481685 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:39:06.481699 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-28 03:39:06.481712 | orchestrator | 2026-03-28 03:39:06.481724 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-28 03:39:06.481737 | orchestrator | Saturday 28 March 2026 03:38:53 +0000 (0:00:04.027) 0:00:18.835 ******** 2026-03-28 03:39:06.481749 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:39:06.481762 | orchestrator | 2026-03-28 03:39:06.481774 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-28 03:39:06.481786 | orchestrator | Saturday 28 March 2026 03:38:57 +0000 (0:00:03.292) 0:00:22.128 ******** 2026-03-28 03:39:06.481798 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-28 03:39:06.481810 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-28 03:39:06.481822 | orchestrator | 2026-03-28 03:39:06.481835 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-28 03:39:06.481847 | orchestrator | Saturday 28 March 2026 03:39:04 +0000 (0:00:07.310) 0:00:29.439 ******** 2026-03-28 03:39:06.481877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:06.481912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:06.481925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:06.481946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:06.481959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:06.481977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:06.481991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:06.482011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:12.594975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:12.595089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:12.595099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:12.595167 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:12.595175 | orchestrator | 2026-03-28 03:39:12.595183 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 03:39:12.595192 | orchestrator | Saturday 28 March 2026 03:39:06 +0000 (0:00:02.175) 0:00:31.614 ******** 2026-03-28 03:39:12.595198 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:12.595205 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:39:12.595211 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:39:12.595218 | orchestrator | 2026-03-28 03:39:12.595224 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 03:39:12.595231 | orchestrator | Saturday 28 March 2026 03:39:07 +0000 (0:00:00.532) 0:00:32.146 ******** 2026-03-28 03:39:12.595239 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:39:12.595245 | orchestrator | 2026-03-28 03:39:12.595252 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-28 03:39:12.595258 | orchestrator | Saturday 28 March 2026 03:39:07 +0000 (0:00:00.591) 0:00:32.738 ******** 2026-03-28 03:39:12.595265 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-28 03:39:12.595272 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-28 03:39:12.595278 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-28 03:39:12.595285 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-28 03:39:12.595299 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-28 03:39:12.595306 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-28 03:39:12.595312 | orchestrator | 2026-03-28 03:39:12.595318 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-28 03:39:12.595324 | orchestrator | Saturday 28 March 2026 03:39:09 +0000 (0:00:01.744) 0:00:34.483 ******** 2026-03-28 03:39:12.595365 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:12.595373 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:12.595385 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:12.595400 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:12.595412 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:23.964394 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-28 03:39:23.964509 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 03:39:23.964517 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 03:39:23.964538 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 03:39:23.964544 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 03:39:23.964580 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 
03:39:23.964584 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-28 03:39:23.964589 | orchestrator | 2026-03-28 03:39:23.964594 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-28 03:39:23.964600 | orchestrator | Saturday 28 March 2026 03:39:13 +0000 (0:00:03.570) 0:00:38.053 ******** 2026-03-28 03:39:23.964604 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:39:23.964610 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:39:23.964614 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-28 03:39:23.964618 | orchestrator | 2026-03-28 03:39:23.964622 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-28 03:39:23.964626 | orchestrator | Saturday 28 March 2026 03:39:14 +0000 (0:00:01.587) 0:00:39.640 ******** 2026-03-28 03:39:23.964630 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-28 03:39:23.964634 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-28 03:39:23.964638 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-28 03:39:23.964642 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 03:39:23.964646 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 03:39:23.964654 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-28 03:39:23.964658 | orchestrator | 2026-03-28 03:39:23.964663 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-28 03:39:23.964669 | orchestrator | Saturday 28 March 2026 03:39:17 +0000 (0:00:02.802) 0:00:42.443 ******** 2026-03-28 03:39:23.964677 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-28 03:39:23.964687 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-28 03:39:23.964700 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-28 03:39:23.964706 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-28 03:39:23.964712 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-28 03:39:23.964719 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-28 03:39:23.964724 | orchestrator | 2026-03-28 03:39:23.964730 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-28 03:39:23.964736 | orchestrator | Saturday 28 March 2026 03:39:18 +0000 (0:00:01.071) 0:00:43.515 ******** 2026-03-28 03:39:23.964742 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:23.964748 | orchestrator | 2026-03-28 03:39:23.964754 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-28 03:39:23.964760 | orchestrator | Saturday 28 March 2026 03:39:18 +0000 (0:00:00.143) 0:00:43.658 ******** 2026-03-28 03:39:23.964766 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:23.964772 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 03:39:23.964778 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:39:23.964783 | orchestrator | 2026-03-28 03:39:23.964789 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 03:39:23.964795 | orchestrator | Saturday 28 March 2026 03:39:19 +0000 (0:00:00.518) 0:00:44.176 ******** 2026-03-28 03:39:23.964802 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:39:23.964808 | orchestrator | 2026-03-28 03:39:23.964814 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-28 03:39:23.964820 | orchestrator | Saturday 28 March 2026 03:39:19 +0000 (0:00:00.630) 0:00:44.807 ******** 2026-03-28 03:39:23.964836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:25.015055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:25.015212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:25.015253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 
03:39:25.015363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:25.015413 | orchestrator | 2026-03-28 03:39:25.015429 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-28 03:39:25.015445 | orchestrator | Saturday 28 March 2026 03:39:24 +0000 (0:00:04.297) 0:00:49.104 ******** 2026-03-28 03:39:25.015470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.128387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128538 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:25.128550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.128561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128619 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:39:25.128628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.128638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.128672 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 03:39:25.128682 | orchestrator | 2026-03-28 03:39:25.128692 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-28 03:39:25.128707 | orchestrator | Saturday 28 March 2026 03:39:25 +0000 (0:00:01.066) 0:00:50.170 ******** 2026-03-28 03:39:25.807657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.807762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807806 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:25.807819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.807881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807926 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:39:25.807938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:25.807950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:25.807978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:30.662449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:30.662528 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:39:30.662535 | orchestrator | 2026-03-28 03:39:30.662554 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-28 03:39:30.662560 | orchestrator | Saturday 28 March 2026 03:39:26 +0000 (0:00:00.990) 0:00:51.161 ******** 2026-03-28 03:39:30.662566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:30.662572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 
03:39:30.662577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:30.662609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:30.662653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797628 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797741 | orchestrator | 2026-03-28 03:39:43.797751 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-28 03:39:43.797760 | orchestrator | Saturday 28 March 2026 03:39:30 +0000 (0:00:04.640) 0:00:55.801 ******** 2026-03-28 03:39:43.797768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 03:39:43.797776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 03:39:43.797783 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-28 03:39:43.797791 | orchestrator | 2026-03-28 03:39:43.797798 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-28 03:39:43.797806 | orchestrator | Saturday 28 March 2026 03:39:32 +0000 (0:00:01.988) 0:00:57.790 ******** 2026-03-28 03:39:43.797814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:43.797840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:43.797863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:43.797877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:43.797929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:46.300994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:46.301154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:46.301173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:46.301211 | orchestrator | 2026-03-28 03:39:46.301225 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-28 03:39:46.301238 | orchestrator | Saturday 28 March 2026 03:39:43 +0000 (0:00:11.150) 0:01:08.941 ******** 2026-03-28 03:39:46.301250 | orchestrator | changed: [testbed-node-0] 
2026-03-28 03:39:46.301262 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:39:46.301273 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:39:46.301285 | orchestrator | 2026-03-28 03:39:46.301296 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-28 03:39:46.301307 | orchestrator | Saturday 28 March 2026 03:39:45 +0000 (0:00:01.599) 0:01:10.540 ******** 2026-03-28 03:39:46.301319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:46.301334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-28 03:39:46.301373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:46.301387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:46.301407 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:46.301419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:46.301431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:46.301443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:46.301469 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:50.030801 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:39:50.031006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-28 03:39:50.031076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:39:50.031093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 03:39:50.031144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 03:39:50.031158 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:39:50.031170 | orchestrator | 2026-03-28 
03:39:50.031183 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-28 03:39:50.031197 | orchestrator | Saturday 28 March 2026 03:39:46 +0000 (0:00:00.900) 0:01:11.441 ******** 2026-03-28 03:39:50.031208 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:39:50.031219 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:39:50.031230 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:39:50.031240 | orchestrator | 2026-03-28 03:39:50.031252 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-28 03:39:50.031266 | orchestrator | Saturday 28 March 2026 03:39:46 +0000 (0:00:00.606) 0:01:12.048 ******** 2026-03-28 03:39:50.031324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:50.031352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:50.031366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-28 03:39:50.031379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:50.031393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:50.031423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:39:50.031457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:41:31.914598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:41:31.914685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 03:41:31.914693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:41:31.914698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 03:41:31.914714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-28 03:41:31.914734 | orchestrator | 2026-03-28 03:41:31.914740 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 03:41:31.914745 | orchestrator | Saturday 28 March 2026 03:39:50 +0000 (0:00:03.118) 0:01:15.166 ******** 2026-03-28 03:41:31.914749 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:41:31.914754 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:41:31.914758 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:41:31.914762 | orchestrator | 2026-03-28 03:41:31.914766 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-28 03:41:31.914770 | orchestrator | Saturday 28 March 2026 03:39:50 +0000 (0:00:00.317) 0:01:15.484 ******** 2026-03-28 03:41:31.914775 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914779 | orchestrator | 2026-03-28 03:41:31.914793 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-28 03:41:31.914797 | orchestrator | Saturday 28 March 2026 03:39:52 +0000 (0:00:02.101) 0:01:17.586 ******** 2026-03-28 03:41:31.914801 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914805 | orchestrator | 2026-03-28 03:41:31.914809 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-28 03:41:31.914813 | orchestrator | Saturday 28 March 2026 03:39:54 +0000 (0:00:02.171) 0:01:19.757 ******** 2026-03-28 03:41:31.914817 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914821 | orchestrator | 2026-03-28 03:41:31.914825 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 03:41:31.914829 | orchestrator | Saturday 28 March 2026 03:40:14 +0000 (0:00:20.249) 0:01:40.007 ******** 2026-03-28 03:41:31.914833 | orchestrator | 2026-03-28 03:41:31.914837 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-28 03:41:31.914841 | orchestrator | Saturday 28 March 2026 03:40:15 +0000 (0:00:00.069) 0:01:40.077 ******** 2026-03-28 03:41:31.914845 | orchestrator | 2026-03-28 03:41:31.914849 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 03:41:31.914853 | orchestrator | Saturday 28 March 2026 03:40:15 +0000 (0:00:00.069) 0:01:40.146 ******** 2026-03-28 03:41:31.914857 | orchestrator | 2026-03-28 03:41:31.914861 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-28 03:41:31.914865 | orchestrator | Saturday 28 March 2026 03:40:15 +0000 (0:00:00.072) 0:01:40.219 ******** 2026-03-28 03:41:31.914869 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914873 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:41:31.914877 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:41:31.914881 | orchestrator | 2026-03-28 03:41:31.914885 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-28 03:41:31.914889 | orchestrator | Saturday 28 March 2026 03:40:47 +0000 (0:00:32.722) 0:02:12.941 ******** 2026-03-28 03:41:31.914893 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914896 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:41:31.914900 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:41:31.914904 | orchestrator | 2026-03-28 03:41:31.914908 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-28 03:41:31.914912 | orchestrator | Saturday 28 March 2026 03:40:58 +0000 (0:00:10.252) 0:02:23.193 ******** 2026-03-28 03:41:31.914916 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914920 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:41:31.914924 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:41:31.914928 | orchestrator | 2026-03-28 
03:41:31.914932 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-28 03:41:31.914936 | orchestrator | Saturday 28 March 2026 03:41:25 +0000 (0:00:27.368) 0:02:50.562 ******** 2026-03-28 03:41:31.914940 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:41:31.914944 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:41:31.914948 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:41:31.914956 | orchestrator | 2026-03-28 03:41:31.914961 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-28 03:41:31.914965 | orchestrator | Saturday 28 March 2026 03:41:31 +0000 (0:00:06.077) 0:02:56.639 ******** 2026-03-28 03:41:31.914969 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:41:31.914973 | orchestrator | 2026-03-28 03:41:31.914977 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:41:31.914982 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 03:41:31.914988 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 03:41:31.914992 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 03:41:31.914996 | orchestrator | 2026-03-28 03:41:31.915000 | orchestrator | 2026-03-28 03:41:31.915004 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:41:31.915008 | orchestrator | Saturday 28 March 2026 03:41:31 +0000 (0:00:00.295) 0:02:56.935 ******** 2026-03-28 03:41:31.915012 | orchestrator | =============================================================================== 2026-03-28 03:41:31.915016 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.72s 2026-03-28 03:41:31.915020 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 27.37s 2026-03-28 03:41:31.915024 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.25s 2026-03-28 03:41:31.915028 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.15s 2026-03-28 03:41:31.915035 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.25s 2026-03-28 03:41:31.915039 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.31s 2026-03-28 03:41:31.915043 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.41s 2026-03-28 03:41:31.915047 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.08s 2026-03-28 03:41:31.915051 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.64s 2026-03-28 03:41:31.915055 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.30s 2026-03-28 03:41:31.915059 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.03s 2026-03-28 03:41:31.915063 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.60s 2026-03-28 03:41:31.915067 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.57s 2026-03-28 03:41:31.915071 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.29s 2026-03-28 03:41:31.915077 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.14s 2026-03-28 03:41:32.302537 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.12s 2026-03-28 03:41:32.302693 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.80s 2026-03-28 03:41:32.302713 | orchestrator | cinder : Ensuring 
config directories exist ------------------------------ 2.18s 2026-03-28 03:41:32.302726 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.17s 2026-03-28 03:41:32.302737 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.10s 2026-03-28 03:41:34.857713 | orchestrator | 2026-03-28 03:41:34 | INFO  | Task c34b5e2e-f7fa-4a38-aa14-99759c594548 (barbican) was prepared for execution. 2026-03-28 03:41:34.857805 | orchestrator | 2026-03-28 03:41:34 | INFO  | It takes a moment until task c34b5e2e-f7fa-4a38-aa14-99759c594548 (barbican) has been started and output is visible here. 2026-03-28 03:42:19.743622 | orchestrator | 2026-03-28 03:42:19.743765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:42:19.743821 | orchestrator | 2026-03-28 03:42:19.743840 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:42:19.743859 | orchestrator | Saturday 28 March 2026 03:41:39 +0000 (0:00:00.283) 0:00:00.283 ******** 2026-03-28 03:42:19.743876 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:42:19.743894 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:42:19.743910 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:42:19.743927 | orchestrator | 2026-03-28 03:42:19.743944 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:42:19.743960 | orchestrator | Saturday 28 March 2026 03:41:39 +0000 (0:00:00.367) 0:00:00.650 ******** 2026-03-28 03:42:19.743976 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-28 03:42:19.743994 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-28 03:42:19.744010 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-28 03:42:19.744026 | orchestrator | 2026-03-28 03:42:19.744043 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-28 03:42:19.744058 | orchestrator | 2026-03-28 03:42:19.744073 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 03:42:19.744090 | orchestrator | Saturday 28 March 2026 03:41:40 +0000 (0:00:00.475) 0:00:01.125 ******** 2026-03-28 03:42:19.744137 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:42:19.744156 | orchestrator | 2026-03-28 03:42:19.744172 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-28 03:42:19.744189 | orchestrator | Saturday 28 March 2026 03:41:40 +0000 (0:00:00.573) 0:00:01.698 ******** 2026-03-28 03:42:19.744206 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-28 03:42:19.744223 | orchestrator | 2026-03-28 03:42:19.744240 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-28 03:42:19.744257 | orchestrator | Saturday 28 March 2026 03:41:44 +0000 (0:00:03.546) 0:00:05.245 ******** 2026-03-28 03:42:19.744273 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-28 03:42:19.744291 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-28 03:42:19.744307 | orchestrator | 2026-03-28 03:42:19.744324 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-28 03:42:19.744340 | orchestrator | Saturday 28 March 2026 03:41:50 +0000 (0:00:06.768) 0:00:12.014 ******** 2026-03-28 03:42:19.744357 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:42:19.744374 | orchestrator | 2026-03-28 03:42:19.744393 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-28 
03:42:19.744411 | orchestrator | Saturday 28 March 2026 03:41:54 +0000 (0:00:03.210) 0:00:15.224 ******** 2026-03-28 03:42:19.744427 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:42:19.744445 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-28 03:42:19.744461 | orchestrator | 2026-03-28 03:42:19.744477 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-28 03:42:19.744493 | orchestrator | Saturday 28 March 2026 03:41:58 +0000 (0:00:04.167) 0:00:19.391 ******** 2026-03-28 03:42:19.744510 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:42:19.744528 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-28 03:42:19.744545 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-28 03:42:19.744581 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-28 03:42:19.744592 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-28 03:42:19.744602 | orchestrator | 2026-03-28 03:42:19.744612 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-28 03:42:19.744621 | orchestrator | Saturday 28 March 2026 03:42:14 +0000 (0:00:15.701) 0:00:35.093 ******** 2026-03-28 03:42:19.744643 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-28 03:42:19.744653 | orchestrator | 2026-03-28 03:42:19.744663 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-28 03:42:19.744672 | orchestrator | Saturday 28 March 2026 03:42:17 +0000 (0:00:03.970) 0:00:39.063 ******** 2026-03-28 03:42:19.744687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:19.744724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:19.744736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:19.744747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:19.744764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:19.744786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:19.744805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.807889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.807968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.807976 | orchestrator | 2026-03-28 03:42:25.807982 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-28 03:42:25.807989 | orchestrator | Saturday 28 March 2026 03:42:19 +0000 (0:00:01.724) 0:00:40.787 ******** 2026-03-28 03:42:25.807995 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-28 03:42:25.808000 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-28 03:42:25.808005 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-28 03:42:25.808009 | orchestrator | 2026-03-28 03:42:25.808014 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-28 03:42:25.808019 | orchestrator | Saturday 28 March 2026 03:42:20 +0000 (0:00:01.247) 0:00:42.035 ******** 2026-03-28 03:42:25.808024 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:42:25.808028 | orchestrator | 2026-03-28 03:42:25.808033 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-28 03:42:25.808056 | orchestrator | Saturday 28 March 2026 03:42:21 +0000 (0:00:00.361) 0:00:42.397 ******** 2026-03-28 03:42:25.808061 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 03:42:25.808066 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:42:25.808070 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:42:25.808075 | orchestrator | 2026-03-28 03:42:25.808079 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 03:42:25.808084 | orchestrator | Saturday 28 March 2026 03:42:21 +0000 (0:00:00.339) 0:00:42.736 ******** 2026-03-28 03:42:25.808137 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:42:25.808143 | orchestrator | 2026-03-28 03:42:25.808148 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-28 03:42:25.808154 | orchestrator | Saturday 28 March 2026 03:42:22 +0000 (0:00:00.571) 0:00:43.308 ******** 2026-03-28 03:42:25.808163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:25.808187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:25.808195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:25.808203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.808221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.808229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.808236 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:25.808249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:26.831840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:26.831909 | orchestrator | 2026-03-28 03:42:26.831916 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-28 03:42:26.831921 | orchestrator | Saturday 28 March 2026 03:42:25 +0000 (0:00:03.543) 0:00:46.851 ******** 2026-03-28 03:42:26.831943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:26.831960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.831965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.831970 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:42:26.831975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:26.831988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.831993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.832002 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:42:26.832009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:26.832013 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.832017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:26.832021 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:42:26.832025 | orchestrator | 2026-03-28 03:42:26.832029 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-28 03:42:26.832033 | orchestrator | Saturday 28 March 2026 03:42:26 +0000 (0:00:00.693) 0:00:47.545 ******** 2026-03-28 03:42:26.832041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:31.198747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:31.198882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 
03:42:31.198910 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:42:31.198955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:31.198979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:31.198998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:31.199062 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:42:31.199178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:31.199222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:31.199246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:31.199260 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:42:31.199273 | orchestrator | 2026-03-28 03:42:31.199287 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-28 03:42:31.199302 | orchestrator | Saturday 28 March 2026 03:42:27 +0000 (0:00:01.068) 0:00:48.613 ******** 2026-03-28 03:42:31.199316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:31.199329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:31.199383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:41.023934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.024199 | orchestrator | 2026-03-28 03:42:41.024213 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-28 03:42:41.024226 | orchestrator | Saturday 28 March 2026 03:42:31 +0000 (0:00:03.632) 0:00:52.246 ******** 2026-03-28 03:42:41.024238 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:42:41.024251 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:42:41.024262 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:42:41.024273 | orchestrator | 2026-03-28 03:42:41.024301 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-28 03:42:41.024313 | orchestrator | Saturday 28 March 2026 03:42:32 +0000 (0:00:01.581) 0:00:53.828 ******** 2026-03-28 03:42:41.024325 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:42:41.024336 | orchestrator | 2026-03-28 03:42:41.024347 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-28 03:42:41.024358 | orchestrator | Saturday 28 March 2026 03:42:33 +0000 (0:00:01.026) 0:00:54.854 ******** 2026-03-28 03:42:41.024369 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:42:41.024380 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:42:41.024391 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:42:41.024402 | orchestrator | 2026-03-28 03:42:41.024415 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-28 03:42:41.024433 | orchestrator | Saturday 28 March 2026 03:42:34 +0000 (0:00:00.649) 0:00:55.503 ******** 2026-03-28 03:42:41.024578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:41.024613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:41.024640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:41.024665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:41.941735 | orchestrator | 2026-03-28 03:42:41.941748 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-28 03:42:41.941761 | orchestrator | Saturday 28 March 2026 03:42:41 +0000 (0:00:06.576) 0:01:02.080 ******** 2026-03-28 03:42:41.941791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:41.941810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:41.941823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:41.941834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:42:41.941847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:41.941871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:41.941883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:41.941894 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:42:41.941914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-28 03:42:44.286388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:42:44.286473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:42:44.286505 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:42:44.286516 | orchestrator | 2026-03-28 03:42:44.286525 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-28 03:42:44.286533 | orchestrator | Saturday 28 March 2026 03:42:41 +0000 (0:00:00.911) 0:01:02.992 ******** 2026-03-28 03:42:44.286542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:44.286551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:44.286574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-28 03:42:44.286588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:42:44.286641 | orchestrator | 2026-03-28 03:42:44.286649 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 03:42:44.286661 | orchestrator | Saturday 28 March 2026 03:42:44 +0000 (0:00:02.339) 0:01:05.331 ******** 2026-03-28 03:43:28.619495 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:43:28.619618 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
03:43:28.619643 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:43:28.619663 | orchestrator | 2026-03-28 03:43:28.619705 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-28 03:43:28.619758 | orchestrator | Saturday 28 March 2026 03:42:44 +0000 (0:00:00.339) 0:01:05.670 ******** 2026-03-28 03:43:28.619779 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.619799 | orchestrator | 2026-03-28 03:43:28.619814 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-28 03:43:28.619825 | orchestrator | Saturday 28 March 2026 03:42:46 +0000 (0:00:02.225) 0:01:07.895 ******** 2026-03-28 03:43:28.619836 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.619847 | orchestrator | 2026-03-28 03:43:28.619858 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-28 03:43:28.619868 | orchestrator | Saturday 28 March 2026 03:42:49 +0000 (0:00:02.224) 0:01:10.119 ******** 2026-03-28 03:43:28.619879 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.619890 | orchestrator | 2026-03-28 03:43:28.619901 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 03:43:28.619912 | orchestrator | Saturday 28 March 2026 03:43:01 +0000 (0:00:12.410) 0:01:22.530 ******** 2026-03-28 03:43:28.619922 | orchestrator | 2026-03-28 03:43:28.619933 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 03:43:28.619944 | orchestrator | Saturday 28 March 2026 03:43:01 +0000 (0:00:00.080) 0:01:22.610 ******** 2026-03-28 03:43:28.619955 | orchestrator | 2026-03-28 03:43:28.619966 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 03:43:28.619977 | orchestrator | Saturday 28 March 2026 03:43:01 +0000 (0:00:00.074) 0:01:22.685 ******** 2026-03-28 
03:43:28.619987 | orchestrator | 2026-03-28 03:43:28.619998 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-28 03:43:28.620009 | orchestrator | Saturday 28 March 2026 03:43:01 +0000 (0:00:00.082) 0:01:22.768 ******** 2026-03-28 03:43:28.620020 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.620033 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:43:28.620045 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:43:28.620058 | orchestrator | 2026-03-28 03:43:28.620070 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-28 03:43:28.620083 | orchestrator | Saturday 28 March 2026 03:43:13 +0000 (0:00:11.441) 0:01:34.209 ******** 2026-03-28 03:43:28.620095 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.620136 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:43:28.620151 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:43:28.620163 | orchestrator | 2026-03-28 03:43:28.620175 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-28 03:43:28.620188 | orchestrator | Saturday 28 March 2026 03:43:22 +0000 (0:00:09.830) 0:01:44.040 ******** 2026-03-28 03:43:28.620200 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:43:28.620212 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:43:28.620224 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:43:28.620237 | orchestrator | 2026-03-28 03:43:28.620249 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:43:28.620262 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 03:43:28.620283 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:43:28.620302 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:43:28.620322 | orchestrator | 2026-03-28 03:43:28.620340 | orchestrator | 2026-03-28 03:43:28.620359 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:43:28.620379 | orchestrator | Saturday 28 March 2026 03:43:28 +0000 (0:00:05.272) 0:01:49.312 ******** 2026-03-28 03:43:28.620398 | orchestrator | =============================================================================== 2026-03-28 03:43:28.620417 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.70s 2026-03-28 03:43:28.620443 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.41s 2026-03-28 03:43:28.620454 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.44s 2026-03-28 03:43:28.620465 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.83s 2026-03-28 03:43:28.620476 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.77s 2026-03-28 03:43:28.620486 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.58s 2026-03-28 03:43:28.620512 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.27s 2026-03-28 03:43:28.620534 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.17s 2026-03-28 03:43:28.620546 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.97s 2026-03-28 03:43:28.620557 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.63s 2026-03-28 03:43:28.620568 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.55s 2026-03-28 03:43:28.620578 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.54s 
2026-03-28 03:43:28.620589 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s 2026-03-28 03:43:28.620600 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.34s 2026-03-28 03:43:28.620612 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.23s 2026-03-28 03:43:28.620643 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.22s 2026-03-28 03:43:28.620654 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.72s 2026-03-28 03:43:28.620672 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.58s 2026-03-28 03:43:28.620683 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.25s 2026-03-28 03:43:28.620694 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.07s 2026-03-28 03:43:31.055461 | orchestrator | 2026-03-28 03:43:31 | INFO  | Task 794cacb1-0cc1-4848-af80-c3a6112793c8 (designate) was prepared for execution. 2026-03-28 03:43:31.055573 | orchestrator | 2026-03-28 03:43:31 | INFO  | It takes a moment until task 794cacb1-0cc1-4848-af80-c3a6112793c8 (designate) has been started and output is visible here. 
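The barbican play above loops over a dict of container definitions (image, volumes, healthcheck) for each service. As a minimal offline sketch of that data shape, the snippet below copies the `barbican-worker` item from the log and checks its healthcheck block; `validate_healthcheck` is a hypothetical helper for illustration, not part of kolla-ansible.

```python
# Container definition copied from the barbican-worker item echoed in the log.
barbican_worker = {
    "container_name": "barbican_worker",
    "group": "barbican-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130",
    "volumes": [
        "/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
        "timeout": "30",
    },
}


def validate_healthcheck(hc: dict) -> bool:
    """Hypothetical check that a healthcheck dict has the fields seen in the log."""
    required = {"interval", "retries", "start_period", "test", "timeout"}
    if not required.issubset(hc):
        return False
    # Numeric fields appear as strings in the log output; they must parse as ints.
    if not all(str(hc[k]).isdigit() for k in ("interval", "retries", "start_period", "timeout")):
        return False
    # The test command is a ["CMD-SHELL", "<command>"] pair.
    return isinstance(hc["test"], list) and hc["test"][0] == "CMD-SHELL"


print(validate_healthcheck(barbican_worker["healthcheck"]))  # True
```

The same check applies to the `barbican-api` and `barbican-keystone-listener` items, whose healthcheck blocks differ only in the test command (`healthcheck_curl` against the node's 9311 endpoint for the API).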
2026-03-28 03:44:03.108801 | orchestrator | 2026-03-28 03:44:03.108882 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:44:03.108890 | orchestrator | 2026-03-28 03:44:03.108895 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:44:03.108900 | orchestrator | Saturday 28 March 2026 03:43:35 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-03-28 03:44:03.108904 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:44:03.108909 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:44:03.108913 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:44:03.108917 | orchestrator | 2026-03-28 03:44:03.108921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:44:03.108925 | orchestrator | Saturday 28 March 2026 03:43:35 +0000 (0:00:00.352) 0:00:00.624 ******** 2026-03-28 03:44:03.108930 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-28 03:44:03.108934 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-28 03:44:03.108938 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-28 03:44:03.108942 | orchestrator | 2026-03-28 03:44:03.108946 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-28 03:44:03.108950 | orchestrator | 2026-03-28 03:44:03.108954 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 03:44:03.108958 | orchestrator | Saturday 28 March 2026 03:43:36 +0000 (0:00:00.473) 0:00:01.098 ******** 2026-03-28 03:44:03.108962 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:44:03.108980 | orchestrator | 2026-03-28 03:44:03.108985 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-03-28 03:44:03.108989 | orchestrator | Saturday 28 March 2026 03:43:36 +0000 (0:00:00.635) 0:00:01.733 ******** 2026-03-28 03:44:03.108992 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-28 03:44:03.108996 | orchestrator | 2026-03-28 03:44:03.109000 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-28 03:44:03.109004 | orchestrator | Saturday 28 March 2026 03:43:40 +0000 (0:00:03.314) 0:00:05.048 ******** 2026-03-28 03:44:03.109008 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-28 03:44:03.109012 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-28 03:44:03.109016 | orchestrator | 2026-03-28 03:44:03.109020 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-28 03:44:03.109024 | orchestrator | Saturday 28 March 2026 03:43:46 +0000 (0:00:06.566) 0:00:11.615 ******** 2026-03-28 03:44:03.109028 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:44:03.109032 | orchestrator | 2026-03-28 03:44:03.109036 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-28 03:44:03.109039 | orchestrator | Saturday 28 March 2026 03:43:49 +0000 (0:00:03.205) 0:00:14.820 ******** 2026-03-28 03:44:03.109043 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:44:03.109047 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-28 03:44:03.109051 | orchestrator | 2026-03-28 03:44:03.109055 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-28 03:44:03.109059 | orchestrator | Saturday 28 March 2026 03:43:53 +0000 (0:00:04.018) 0:00:18.838 ******** 2026-03-28 03:44:03.109063 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-03-28 03:44:03.109067 | orchestrator | 2026-03-28 03:44:03.109070 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-28 03:44:03.109075 | orchestrator | Saturday 28 March 2026 03:43:57 +0000 (0:00:03.260) 0:00:22.099 ******** 2026-03-28 03:44:03.109081 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-28 03:44:03.109087 | orchestrator | 2026-03-28 03:44:03.109093 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-28 03:44:03.109099 | orchestrator | Saturday 28 March 2026 03:44:00 +0000 (0:00:03.783) 0:00:25.882 ******** 2026-03-28 03:44:03.109155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:03.109181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:03.109194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:03.109201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:03.109209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:03.109216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:03.109226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:03.109239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.445986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 
03:44:09.445998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.446010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:09.446090 | orchestrator | 2026-03-28 03:44:09.446105 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-28 03:44:09.446201 | orchestrator | Saturday 28 March 2026 03:44:03 +0000 (0:00:02.978) 0:00:28.861 ******** 2026-03-28 03:44:09.446214 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:09.446226 | orchestrator | 2026-03-28 03:44:09.446239 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-28 03:44:09.446253 | orchestrator | Saturday 28 March 2026 03:44:04 +0000 (0:00:00.143) 0:00:29.004 ******** 2026-03-28 03:44:09.446266 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
03:44:09.446279 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:44:09.446293 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:44:09.446305 | orchestrator | 2026-03-28 03:44:09.446318 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 03:44:09.446331 | orchestrator | Saturday 28 March 2026 03:44:04 +0000 (0:00:00.554) 0:00:29.559 ******** 2026-03-28 03:44:09.446345 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:44:09.446358 | orchestrator | 2026-03-28 03:44:09.446371 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-28 03:44:09.446395 | orchestrator | Saturday 28 March 2026 03:44:05 +0000 (0:00:00.564) 0:00:30.123 ******** 2026-03-28 03:44:09.446417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:09.446443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:11.280699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:11.280808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.280989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.281008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.281025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.281054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.281065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:11.281086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:12.219687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:12.219783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:12.219799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:12.219835 | orchestrator | 2026-03-28 03:44:12.219849 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-28 03:44:12.219860 | orchestrator | Saturday 28 March 2026 03:44:11 +0000 (0:00:06.058) 0:00:36.182 ******** 2026-03-28 03:44:12.219886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:12.219898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:12.219924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:12.219936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:12.219946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:12.219957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-28 03:44:12.219976 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:12.219993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:12.220004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:12.220014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:12.220031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.028808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.028930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.028945 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:44:13.028973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:13.028985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:13.028995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.029006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.029033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 
03:44:13.029055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.029066 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:44:13.029076 | orchestrator | 2026-03-28 03:44:13.029087 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-28 03:44:13.029098 | orchestrator | Saturday 28 March 2026 03:44:12 +0000 (0:00:01.050) 0:00:37.232 ******** 2026-03-28 03:44:13.029184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:13.029200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:13.029210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.029227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400465 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:13.400497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:13.400510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:13.400523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400619 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:44:13.400636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:13.400648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:13.400660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:13.400700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:17.936323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:17.936436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:44:17.936453 | orchestrator | 2026-03-28 03:44:17.936466 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-28 
03:44:17.936479 | orchestrator | Saturday 28 March 2026 03:44:13 +0000 (0:00:01.071) 0:00:38.303 ******** 2026-03-28 03:44:17.936509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:17.936522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:17.936535 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:17.936587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:17.936698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:29.812763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:29.812918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:29.812940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:29.812953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:29.812987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:29.813001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:29.813031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:29.813044 | orchestrator |
2026-03-28 03:44:29.813058 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-28 03:44:29.813070 | orchestrator | Saturday 28 March 2026 03:44:19 +0000 (0:00:06.421) 0:00:44.725 ********
2026-03-28 03:44:29.813089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:29.813103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:29.813185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:29.813199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:29.813222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:38.213722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:38.213855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value':
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.213985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:38.214277 | orchestrator |
2026-03-28 03:44:38.214291 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-28 03:44:38.214304 | orchestrator | Saturday 28 March 2026 03:44:34 +0000 (0:00:14.632) 0:00:59.358 ********
2026-03-28 03:44:38.214328 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 03:44:42.582989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 03:44:42.583075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-28 03:44:42.583086 | orchestrator |
2026-03-28 03:44:42.583096 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-28 03:44:42.583104 | orchestrator | Saturday 28 March 2026 03:44:38 +0000 (0:00:03.756) 0:01:03.114 ********
2026-03-28 03:44:42.583112 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 03:44:42.583166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 03:44:42.583174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-28 03:44:42.583181 | orchestrator |
2026-03-28 03:44:42.583188 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-28 03:44:42.583210 | orchestrator | Saturday 28 March 2026 03:44:40 +0000 (0:00:02.459) 0:01:05.574 ********
2026-03-28 03:44:42.583221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:42.583251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:42.583259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:42.583280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:42.583289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:42.583302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:42.583322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:42.583333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:42.583342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'],
'timeout': '30'}}})
2026-03-28 03:44:42.583351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:42.583367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:45.659695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:45.659765 | orchestrator |
2026-03-28 03:44:45.659774 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-28 03:44:45.659782 | orchestrator | Saturday 28 March 2026 03:44:43 +0000 (0:00:03.074) 0:01:08.649 ********
2026-03-28 03:44:45.659795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:45.659803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:45.659809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-28 03:44:45.659816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:45.659827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:46.711519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 03:44:46.711631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:44:46.711711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:46.711724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:46.711746 | orchestrator | 2026-03-28 03:44:46.711760 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 03:44:46.711782 | orchestrator | Saturday 28 March 2026 03:44:46 +0000 (0:00:02.958) 0:01:11.608 ******** 2026-03-28 03:44:47.660227 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:47.660297 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:44:47.660304 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:44:47.660311 | orchestrator | 2026-03-28 03:44:47.660318 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-28 03:44:47.660325 | orchestrator | Saturday 28 March 2026 03:44:47 +0000 (0:00:00.311) 0:01:11.920 ******** 2026-03-28 03:44:47.660345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:47.660354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:47.660361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:47.660423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:47.660428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:47.660433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:47.660456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:51.190877 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:44:51.191055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-28 03:44:51.191212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 03:44:51.191238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 03:44:51.191258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 03:44:51.191303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 03:44:51.191322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:44:51.191340 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:44:51.191358 | orchestrator | 2026-03-28 03:44:51.191399 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-28 03:44:51.191419 | orchestrator | Saturday 28 March 2026 03:44:47 +0000 (0:00:00.757) 0:01:12.677 ******** 2026-03-28 03:44:51.191447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:51.191467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:51.191486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-28 03:44:51.191516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:51.191545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.146991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:44:53.147070 | orchestrator | 2026-03-28 03:44:53.147083 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 03:44:53.147095 | orchestrator | Saturday 28 March 2026 03:44:52 +0000 (0:00:04.818) 0:01:17.496 ******** 2026-03-28 03:44:53.147106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:44:53.147154 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:46:05.134660 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:46:05.134770 | orchestrator | 2026-03-28 03:46:05.134784 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-03-28 03:46:05.134810 | orchestrator | Saturday 28 March 2026 03:44:53 +0000 (0:00:00.557) 0:01:18.053 ******** 2026-03-28 03:46:05.134820 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-28 03:46:05.134830 | orchestrator | 2026-03-28 03:46:05.134839 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-28 03:46:05.134848 | orchestrator | Saturday 28 March 2026 03:44:55 +0000 (0:00:02.192) 0:01:20.246 ******** 2026-03-28 03:46:05.134857 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 03:46:05.134867 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-28 03:46:05.134875 | orchestrator | 2026-03-28 03:46:05.134884 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-28 03:46:05.134893 | orchestrator | Saturday 28 March 2026 03:44:57 +0000 (0:00:02.260) 0:01:22.506 ******** 2026-03-28 03:46:05.134902 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.134910 | orchestrator | 2026-03-28 03:46:05.134919 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 03:46:05.134928 | orchestrator | Saturday 28 March 2026 03:45:13 +0000 (0:00:16.119) 0:01:38.626 ******** 2026-03-28 03:46:05.134936 | orchestrator | 2026-03-28 03:46:05.134945 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 03:46:05.134954 | orchestrator | Saturday 28 March 2026 03:45:13 +0000 (0:00:00.081) 0:01:38.708 ******** 2026-03-28 03:46:05.134962 | orchestrator | 2026-03-28 03:46:05.134991 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 03:46:05.135001 | orchestrator | Saturday 28 March 2026 03:45:13 +0000 (0:00:00.080) 0:01:38.788 ******** 2026-03-28 03:46:05.135010 | orchestrator | 2026-03-28 
03:46:05.135018 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-28 03:46:05.135027 | orchestrator | Saturday 28 March 2026 03:45:13 +0000 (0:00:00.081) 0:01:38.870 ******** 2026-03-28 03:46:05.135036 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135045 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135054 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135062 | orchestrator | 2026-03-28 03:46:05.135071 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-28 03:46:05.135080 | orchestrator | Saturday 28 March 2026 03:45:22 +0000 (0:00:08.078) 0:01:46.948 ******** 2026-03-28 03:46:05.135088 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135097 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135105 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135114 | orchestrator | 2026-03-28 03:46:05.135122 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-28 03:46:05.135177 | orchestrator | Saturday 28 March 2026 03:45:27 +0000 (0:00:05.769) 0:01:52.717 ******** 2026-03-28 03:46:05.135186 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135195 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135205 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135214 | orchestrator | 2026-03-28 03:46:05.135224 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-28 03:46:05.135234 | orchestrator | Saturday 28 March 2026 03:45:38 +0000 (0:00:10.948) 0:02:03.666 ******** 2026-03-28 03:46:05.135244 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135254 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135264 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135275 | orchestrator | 2026-03-28 03:46:05.135285 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-28 03:46:05.135295 | orchestrator | Saturday 28 March 2026 03:45:44 +0000 (0:00:06.060) 0:02:09.726 ******** 2026-03-28 03:46:05.135305 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135315 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135324 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135333 | orchestrator | 2026-03-28 03:46:05.135342 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-28 03:46:05.135351 | orchestrator | Saturday 28 March 2026 03:45:50 +0000 (0:00:06.046) 0:02:15.773 ******** 2026-03-28 03:46:05.135359 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135368 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:46:05.135376 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:46:05.135385 | orchestrator | 2026-03-28 03:46:05.135393 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-28 03:46:05.135402 | orchestrator | Saturday 28 March 2026 03:45:57 +0000 (0:00:06.321) 0:02:22.095 ******** 2026-03-28 03:46:05.135411 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:46:05.135419 | orchestrator | 2026-03-28 03:46:05.135428 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:46:05.135438 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 03:46:05.135448 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:46:05.135457 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:46:05.135466 | orchestrator | 2026-03-28 03:46:05.135475 | orchestrator | 2026-03-28 03:46:05.135483 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 03:46:05.135500 | orchestrator | Saturday 28 March 2026 03:46:04 +0000 (0:00:07.503) 0:02:29.598 ******** 2026-03-28 03:46:05.135508 | orchestrator | =============================================================================== 2026-03-28 03:46:05.135517 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.12s 2026-03-28 03:46:05.135526 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.63s 2026-03-28 03:46:05.135550 | orchestrator | designate : Restart designate-central container ------------------------ 10.95s 2026-03-28 03:46:05.135559 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.08s 2026-03-28 03:46:05.135573 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.50s 2026-03-28 03:46:05.135583 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.57s 2026-03-28 03:46:05.135591 | orchestrator | designate : Copying over config.json files for services ----------------- 6.42s 2026-03-28 03:46:05.135600 | orchestrator | designate : Restart designate-worker container -------------------------- 6.32s 2026-03-28 03:46:05.135609 | orchestrator | designate : Restart designate-producer container ------------------------ 6.06s 2026-03-28 03:46:05.135618 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.06s 2026-03-28 03:46:05.135626 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.05s 2026-03-28 03:46:05.135635 | orchestrator | designate : Restart designate-api container ----------------------------- 5.77s 2026-03-28 03:46:05.135643 | orchestrator | designate : Check designate containers ---------------------------------- 4.82s 2026-03-28 03:46:05.135652 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.02s 2026-03-28 03:46:05.135660 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.78s 2026-03-28 03:46:05.135669 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.76s 2026-03-28 03:46:05.135678 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.31s 2026-03-28 03:46:05.135687 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.26s 2026-03-28 03:46:05.135695 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.21s 2026-03-28 03:46:05.135704 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.07s 2026-03-28 03:46:07.578580 | orchestrator | 2026-03-28 03:46:07 | INFO  | Task a534dd3c-6811-4f39-a201-55dfbc2c9ebd (octavia) was prepared for execution. 2026-03-28 03:46:07.578666 | orchestrator | 2026-03-28 03:46:07 | INFO  | It takes a moment until task a534dd3c-6811-4f39-a201-55dfbc2c9ebd (octavia) has been started and output is visible here. 
2026-03-28 03:48:14.607399 | orchestrator | 2026-03-28 03:48:14.607480 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:48:14.607488 | orchestrator | 2026-03-28 03:48:14.607494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:48:14.607499 | orchestrator | Saturday 28 March 2026 03:46:12 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-03-28 03:48:14.607504 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:14.607509 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:48:14.607514 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:48:14.607518 | orchestrator | 2026-03-28 03:48:14.607522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:48:14.607527 | orchestrator | Saturday 28 March 2026 03:46:12 +0000 (0:00:00.340) 0:00:00.623 ******** 2026-03-28 03:48:14.607531 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-28 03:48:14.607537 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-28 03:48:14.607541 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-28 03:48:14.607545 | orchestrator | 2026-03-28 03:48:14.607550 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-28 03:48:14.607555 | orchestrator | 2026-03-28 03:48:14.607559 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 03:48:14.607579 | orchestrator | Saturday 28 March 2026 03:46:12 +0000 (0:00:00.452) 0:00:01.076 ******** 2026-03-28 03:48:14.607585 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:48:14.607590 | orchestrator | 2026-03-28 03:48:14.607595 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-28 03:48:14.607599 | orchestrator | Saturday 28 March 2026 03:46:13 +0000 (0:00:00.581) 0:00:01.657 ******** 2026-03-28 03:48:14.607604 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-28 03:48:14.607608 | orchestrator | 2026-03-28 03:48:14.607613 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-28 03:48:14.607617 | orchestrator | Saturday 28 March 2026 03:46:17 +0000 (0:00:03.653) 0:00:05.310 ******** 2026-03-28 03:48:14.607621 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-28 03:48:14.607626 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-28 03:48:14.607630 | orchestrator | 2026-03-28 03:48:14.607635 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-28 03:48:14.607639 | orchestrator | Saturday 28 March 2026 03:46:23 +0000 (0:00:06.513) 0:00:11.824 ******** 2026-03-28 03:48:14.607643 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:48:14.607648 | orchestrator | 2026-03-28 03:48:14.607653 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-28 03:48:14.607657 | orchestrator | Saturday 28 March 2026 03:46:26 +0000 (0:00:03.217) 0:00:15.041 ******** 2026-03-28 03:48:14.607661 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:48:14.607666 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 03:48:14.607671 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 03:48:14.607675 | orchestrator | 2026-03-28 03:48:14.607679 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-28 03:48:14.607684 | orchestrator | Saturday 28 March 2026 03:46:34 +0000 
(0:00:08.123) 0:00:23.165 ******** 2026-03-28 03:48:14.607688 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:48:14.607693 | orchestrator | 2026-03-28 03:48:14.607697 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-28 03:48:14.607711 | orchestrator | Saturday 28 March 2026 03:46:38 +0000 (0:00:03.257) 0:00:26.423 ******** 2026-03-28 03:48:14.607716 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 03:48:14.607720 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 03:48:14.607725 | orchestrator | 2026-03-28 03:48:14.607729 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-28 03:48:14.607733 | orchestrator | Saturday 28 March 2026 03:46:45 +0000 (0:00:07.293) 0:00:33.716 ******** 2026-03-28 03:48:14.607738 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-28 03:48:14.607742 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-28 03:48:14.607746 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-28 03:48:14.607750 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-28 03:48:14.607755 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-28 03:48:14.607759 | orchestrator | 2026-03-28 03:48:14.607767 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 03:48:14.607774 | orchestrator | Saturday 28 March 2026 03:47:01 +0000 (0:00:15.655) 0:00:49.372 ******** 2026-03-28 03:48:14.607786 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:48:14.607793 | orchestrator | 2026-03-28 03:48:14.607800 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-28 03:48:14.607813 | orchestrator | Saturday 28 March 2026 03:47:01 +0000 (0:00:00.777) 0:00:50.150 ******** 2026-03-28 03:48:14.607820 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.607827 | orchestrator | 2026-03-28 03:48:14.607834 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-28 03:48:14.607841 | orchestrator | Saturday 28 March 2026 03:47:07 +0000 (0:00:05.223) 0:00:55.373 ******** 2026-03-28 03:48:14.607848 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.607855 | orchestrator | 2026-03-28 03:48:14.607863 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 03:48:14.607886 | orchestrator | Saturday 28 March 2026 03:47:11 +0000 (0:00:03.949) 0:00:59.322 ******** 2026-03-28 03:48:14.607894 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:14.607900 | orchestrator | 2026-03-28 03:48:14.607904 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-28 03:48:14.607909 | orchestrator | Saturday 28 March 2026 03:47:14 +0000 (0:00:03.062) 0:01:02.385 ******** 2026-03-28 03:48:14.607913 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 03:48:14.607917 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 03:48:14.607922 | orchestrator | 2026-03-28 03:48:14.607926 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-28 03:48:14.607930 | orchestrator | Saturday 28 March 2026 03:47:24 +0000 (0:00:10.544) 0:01:12.929 ******** 2026-03-28 03:48:14.607935 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-28 03:48:14.607939 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-28 03:48:14.607945 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-28 03:48:14.607951 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-28 03:48:14.607955 | orchestrator | 2026-03-28 03:48:14.607960 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-28 03:48:14.607965 | orchestrator | Saturday 28 March 2026 03:47:39 +0000 (0:00:15.106) 0:01:28.036 ******** 2026-03-28 03:48:14.607973 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.607978 | orchestrator | 2026-03-28 03:48:14.607983 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-28 03:48:14.607988 | orchestrator | Saturday 28 March 2026 03:47:44 +0000 (0:00:05.022) 0:01:33.059 ******** 2026-03-28 03:48:14.607993 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.607998 | orchestrator | 2026-03-28 03:48:14.608003 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-28 03:48:14.608008 | orchestrator | Saturday 28 March 2026 03:47:50 +0000 (0:00:05.276) 0:01:38.336 ******** 2026-03-28 03:48:14.608013 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:14.608018 | orchestrator | 2026-03-28 03:48:14.608023 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-28 03:48:14.608028 | orchestrator | Saturday 28 March 2026 03:47:50 +0000 (0:00:00.262) 0:01:38.598 ******** 2026-03-28 03:48:14.608033 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:14.608039 | orchestrator | 2026-03-28 03:48:14.608044 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-28 03:48:14.608049 | orchestrator | Saturday 28 March 2026 03:47:54 +0000 (0:00:04.402) 0:01:43.001 ******** 2026-03-28 03:48:14.608054 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:48:14.608059 | orchestrator | 2026-03-28 03:48:14.608064 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-28 03:48:14.608069 | orchestrator | Saturday 28 March 2026 03:47:55 +0000 (0:00:01.155) 0:01:44.156 ******** 2026-03-28 03:48:14.608082 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608088 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608093 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608098 | orchestrator | 2026-03-28 03:48:14.608103 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-28 03:48:14.608112 | orchestrator | Saturday 28 March 2026 03:48:01 +0000 (0:00:05.799) 0:01:49.956 ******** 2026-03-28 03:48:14.608116 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608121 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608125 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608129 | orchestrator | 2026-03-28 03:48:14.608134 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-28 03:48:14.608138 | orchestrator | Saturday 28 March 2026 03:48:06 +0000 (0:00:04.827) 0:01:54.784 ******** 2026-03-28 03:48:14.608143 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608169 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608178 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608185 | orchestrator | 2026-03-28 03:48:14.608192 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-28 
03:48:14.608199 | orchestrator | Saturday 28 March 2026 03:48:07 +0000 (0:00:01.075) 0:01:55.859 ******** 2026-03-28 03:48:14.608205 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:48:14.608210 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:14.608214 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:48:14.608218 | orchestrator | 2026-03-28 03:48:14.608222 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-28 03:48:14.608227 | orchestrator | Saturday 28 March 2026 03:48:09 +0000 (0:00:01.998) 0:01:57.858 ******** 2026-03-28 03:48:14.608231 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608235 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608240 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608244 | orchestrator | 2026-03-28 03:48:14.608248 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-28 03:48:14.608252 | orchestrator | Saturday 28 March 2026 03:48:11 +0000 (0:00:01.390) 0:01:59.249 ******** 2026-03-28 03:48:14.608257 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608261 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608265 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608269 | orchestrator | 2026-03-28 03:48:14.608274 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-28 03:48:14.608278 | orchestrator | Saturday 28 March 2026 03:48:12 +0000 (0:00:01.307) 0:02:00.556 ******** 2026-03-28 03:48:14.608282 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:14.608287 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:14.608291 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:14.608295 | orchestrator | 2026-03-28 03:48:14.608303 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-28 03:48:41.852787 | orchestrator 
| Saturday 28 March 2026 03:48:14 +0000 (0:00:02.212) 0:02:02.769 ******** 2026-03-28 03:48:41.852900 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:48:41.852920 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:48:41.852934 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:48:41.852949 | orchestrator | 2026-03-28 03:48:41.852965 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-28 03:48:41.852979 | orchestrator | Saturday 28 March 2026 03:48:16 +0000 (0:00:01.639) 0:02:04.408 ******** 2026-03-28 03:48:41.852992 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853006 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:48:41.853019 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:48:41.853031 | orchestrator | 2026-03-28 03:48:41.853044 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-28 03:48:41.853057 | orchestrator | Saturday 28 March 2026 03:48:16 +0000 (0:00:00.672) 0:02:05.081 ******** 2026-03-28 03:48:41.853070 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:48:41.853111 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853125 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:48:41.853139 | orchestrator | 2026-03-28 03:48:41.853178 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 03:48:41.853192 | orchestrator | Saturday 28 March 2026 03:48:21 +0000 (0:00:04.136) 0:02:09.218 ******** 2026-03-28 03:48:41.853207 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:48:41.853220 | orchestrator | 2026-03-28 03:48:41.853234 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-28 03:48:41.853246 | orchestrator | Saturday 28 March 2026 03:48:21 +0000 (0:00:00.577) 0:02:09.795 ******** 2026-03-28 
03:48:41.853259 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853272 | orchestrator | 2026-03-28 03:48:41.853285 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 03:48:41.853298 | orchestrator | Saturday 28 March 2026 03:48:25 +0000 (0:00:04.218) 0:02:14.013 ******** 2026-03-28 03:48:41.853312 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853325 | orchestrator | 2026-03-28 03:48:41.853338 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-28 03:48:41.853350 | orchestrator | Saturday 28 March 2026 03:48:29 +0000 (0:00:03.182) 0:02:17.195 ******** 2026-03-28 03:48:41.853362 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 03:48:41.853374 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 03:48:41.853387 | orchestrator | 2026-03-28 03:48:41.853400 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-28 03:48:41.853414 | orchestrator | Saturday 28 March 2026 03:48:35 +0000 (0:00:06.901) 0:02:24.097 ******** 2026-03-28 03:48:41.853428 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853442 | orchestrator | 2026-03-28 03:48:41.853455 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-28 03:48:41.853468 | orchestrator | Saturday 28 March 2026 03:48:39 +0000 (0:00:03.370) 0:02:27.468 ******** 2026-03-28 03:48:41.853481 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:48:41.853494 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:48:41.853507 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:48:41.853520 | orchestrator | 2026-03-28 03:48:41.853534 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-28 03:48:41.853547 | orchestrator | Saturday 28 March 2026 03:48:39 +0000 (0:00:00.539) 0:02:28.007 ******** 
2026-03-28 03:48:41.853582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:41.853621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:41.853648 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:41.853663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:41.853678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:41.853696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:41.853713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:41.853728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:41.853762 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335778 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:43.335867 | orchestrator | 2026-03-28 03:48:43.335881 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-28 03:48:43.335894 | orchestrator | Saturday 28 March 2026 03:48:42 +0000 (0:00:02.481) 0:02:30.489 ******** 2026-03-28 03:48:43.335905 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:43.335918 | orchestrator | 2026-03-28 03:48:43.335930 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-28 03:48:43.335941 | orchestrator | Saturday 28 March 2026 03:48:42 +0000 (0:00:00.134) 0:02:30.624 ******** 2026-03-28 03:48:43.335952 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:43.335980 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:48:43.335992 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:48:43.336003 | orchestrator | 2026-03-28 03:48:43.336015 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-28 03:48:43.336026 | orchestrator | Saturday 28 March 2026 03:48:42 +0000 (0:00:00.327) 0:02:30.951 ******** 2026-03-28 03:48:43.336039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:43.336053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:43.336071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:43.336085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:43.336104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:43.336116 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:43.336136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:48.564326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:48.564461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:48.564505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:48.564524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:48.564574 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:48:48.564593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:48.564608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:48.564646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:48.564656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:48.564670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:48.564686 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:48:48.564695 | orchestrator | 2026-03-28 03:48:48.564729 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 03:48:48.564740 | orchestrator | Saturday 28 March 2026 03:48:43 +0000 (0:00:00.655) 0:02:31.607 ******** 2026-03-28 03:48:48.564749 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:48:48.564757 | orchestrator | 2026-03-28 03:48:48.564765 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-28 03:48:48.564773 | orchestrator | Saturday 28 March 2026 03:48:44 +0000 (0:00:00.811) 0:02:32.418 ******** 2026-03-28 03:48:48.564782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:48.564791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:48.564808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:50.182616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:50.182742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:50.182757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:50.182768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:48:50.182884 | orchestrator | 2026-03-28 03:48:50.182895 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-28 03:48:50.182905 | orchestrator | Saturday 28 March 2026 03:48:49 +0000 (0:00:05.355) 0:02:37.774 ******** 2026-03-28 03:48:50.182922 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:50.292412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:50.292527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:50.292545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:50.292557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:50.292570 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:50.292584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:50.292596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:50.292644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:50.292661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:50.292671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:50.292682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:48:50.292692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:50.292703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:50.292731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:50.292769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})
2026-03-28 03:48:51.142502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:48:51.142593 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:48:51.142604 | orchestrator |
2026-03-28 03:48:51.142613 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-03-28 03:48:51.142622 | orchestrator | Saturday 28 March 2026 03:48:50 +0000 (0:00:00.691) 0:02:38.466 ********
2026-03-28 03:48:51.142630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 03:48:51.142640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:51.142648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:51.142657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:51.142704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:51.142712 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:48:51.142724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:51.142732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:51.142739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 03:48:51.142746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 03:48:51.142758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 03:48:51.142765 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:48:51.142777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 03:48:56.191567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 03:48:56.191674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-28 03:48:56.191691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-28 03:48:56.191704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:48:56.191740 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:48:56.191754 | orchestrator |
2026-03-28 03:48:56.191767 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-03-28 03:48:56.191780 | orchestrator | Saturday 28 March 2026 03:48:51 +0000 (0:00:01.371) 0:02:39.837 ********
2026-03-28 03:48:56.191792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-28 03:48:56.191828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:56.191842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:48:56.191853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:56.191865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:56.191885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:48:56.191897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:48:56.191920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:49:13.101696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-28 03:49:13.101724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-28 03:49:13.101729 | orchestrator |
2026-03-28 03:49:13.101734 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-03-28 03:49:13.101739 | orchestrator | Saturday 28 March 2026 03:48:57 +0000 (0:00:05.589) 0:02:45.427 ********
2026-03-28 03:49:13.101743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 03:49:13.101749 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 03:49:13.101752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-03-28 03:49:13.101756 | orchestrator |
2026-03-28 03:49:13.101760 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-03-28 03:49:13.101764 | orchestrator | Saturday 28 March 2026 03:48:58 +0000 (0:00:01.710) 0:02:47.137 ********
2026-03-28 03:49:13.101768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:13.101778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:13.101782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:13.101793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:49:28.857658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:49:28.857739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:49:28.857747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:49:28.857832 | orchestrator | 2026-03-28 03:49:28.857837 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-28 03:49:28.857842 | orchestrator | Saturday 28 March 2026 03:49:16 +0000 (0:00:17.612) 0:03:04.750 ******** 2026-03-28 03:49:28.857847 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:49:28.857852 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:49:28.857856 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:49:28.857860 | orchestrator | 2026-03-28 03:49:28.857864 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-28 03:49:28.857868 | orchestrator | Saturday 28 March 2026 03:49:18 +0000 (0:00:01.817) 0:03:06.567 ******** 2026-03-28 03:49:28.857872 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 03:49:28.857876 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 03:49:28.857879 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 03:49:28.857883 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 03:49:28.857887 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 03:49:28.857891 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 03:49:28.857895 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 03:49:28.857899 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 03:49:28.857905 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 03:49:28.857912 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 03:49:28.857918 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 03:49:28.857924 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 03:49:28.857930 | orchestrator | 2026-03-28 03:49:28.857936 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-28 03:49:28.857946 | orchestrator | Saturday 28 March 2026 03:49:23 +0000 (0:00:05.178) 0:03:11.746 ******** 2026-03-28 03:49:28.857951 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 03:49:28.857957 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 03:49:28.857967 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 03:49:37.850581 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850684 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850699 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850711 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850722 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850733 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850743 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850754 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850765 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850776 | orchestrator | 2026-03-28 03:49:37.850789 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-28 03:49:37.850801 | orchestrator | Saturday 28 March 2026 03:49:28 +0000 (0:00:05.278) 0:03:17.025 ******** 2026-03-28 03:49:37.850812 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-28 03:49:37.850823 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 03:49:37.850834 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 03:49:37.850844 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850855 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850866 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 03:49:37.850877 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850888 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850898 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 03:49:37.850909 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850920 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850930 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 03:49:37.850941 | orchestrator | 2026-03-28 03:49:37.850952 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-28 03:49:37.850963 | orchestrator | Saturday 28 March 2026 03:49:34 +0000 (0:00:05.618) 0:03:22.643 ******** 2026-03-28 03:49:37.850978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:37.850993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:37.851066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 03:49:37.851082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:49:37.851096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 03:49:37.851107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-28 03:49:37.851121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:37.851135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:37.851186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 03:49:37.851209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 03:50:58.322829 | orchestrator | 2026-03-28 
03:50:58.322837 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 03:50:58.322847 | orchestrator | Saturday 28 March 2026 03:49:38 +0000 (0:00:04.059) 0:03:26.703 ******** 2026-03-28 03:50:58.322855 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:50:58.322863 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:50:58.322869 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:50:58.322875 | orchestrator | 2026-03-28 03:50:58.322896 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-28 03:50:58.322902 | orchestrator | Saturday 28 March 2026 03:49:39 +0000 (0:00:00.573) 0:03:27.276 ******** 2026-03-28 03:50:58.322908 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.322915 | orchestrator | 2026-03-28 03:50:58.322921 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-28 03:50:58.322927 | orchestrator | Saturday 28 March 2026 03:49:41 +0000 (0:00:02.200) 0:03:29.476 ******** 2026-03-28 03:50:58.322933 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.322939 | orchestrator | 2026-03-28 03:50:58.322946 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-28 03:50:58.322952 | orchestrator | Saturday 28 March 2026 03:49:43 +0000 (0:00:02.144) 0:03:31.621 ******** 2026-03-28 03:50:58.322959 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.322965 | orchestrator | 2026-03-28 03:50:58.322971 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-28 03:50:58.322981 | orchestrator | Saturday 28 March 2026 03:49:45 +0000 (0:00:02.186) 0:03:33.807 ******** 2026-03-28 03:50:58.323002 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323009 | orchestrator | 2026-03-28 03:50:58.323015 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-28 03:50:58.323021 | orchestrator | Saturday 28 March 2026 03:49:47 +0000 (0:00:02.221) 0:03:36.029 ******** 2026-03-28 03:50:58.323027 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323033 | orchestrator | 2026-03-28 03:50:58.323039 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 03:50:58.323045 | orchestrator | Saturday 28 March 2026 03:50:11 +0000 (0:00:23.416) 0:03:59.446 ******** 2026-03-28 03:50:58.323051 | orchestrator | 2026-03-28 03:50:58.323057 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 03:50:58.323063 | orchestrator | Saturday 28 March 2026 03:50:11 +0000 (0:00:00.087) 0:03:59.533 ******** 2026-03-28 03:50:58.323069 | orchestrator | 2026-03-28 03:50:58.323074 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 03:50:58.323079 | orchestrator | Saturday 28 March 2026 03:50:11 +0000 (0:00:00.072) 0:03:59.606 ******** 2026-03-28 03:50:58.323084 | orchestrator | 2026-03-28 03:50:58.323090 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-28 03:50:58.323096 | orchestrator | Saturday 28 March 2026 03:50:11 +0000 (0:00:00.068) 0:03:59.674 ******** 2026-03-28 03:50:58.323101 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323107 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:50:58.323113 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:50:58.323119 | orchestrator | 2026-03-28 03:50:58.323124 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-28 03:50:58.323130 | orchestrator | Saturday 28 March 2026 03:50:24 +0000 (0:00:12.976) 0:04:12.650 ******** 2026-03-28 03:50:58.323142 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323148 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 03:50:58.323153 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:50:58.323158 | orchestrator | 2026-03-28 03:50:58.323185 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-28 03:50:58.323192 | orchestrator | Saturday 28 March 2026 03:50:31 +0000 (0:00:06.711) 0:04:19.362 ******** 2026-03-28 03:50:58.323200 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323209 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:50:58.323216 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:50:58.323222 | orchestrator | 2026-03-28 03:50:58.323229 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-28 03:50:58.323235 | orchestrator | Saturday 28 March 2026 03:50:36 +0000 (0:00:05.652) 0:04:25.015 ******** 2026-03-28 03:50:58.323241 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323247 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:50:58.323254 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:50:58.323260 | orchestrator | 2026-03-28 03:50:58.323266 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-28 03:50:58.323272 | orchestrator | Saturday 28 March 2026 03:50:47 +0000 (0:00:10.560) 0:04:35.575 ******** 2026-03-28 03:50:58.323278 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:50:58.323284 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:50:58.323291 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:50:58.323297 | orchestrator | 2026-03-28 03:50:58.323303 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:50:58.323311 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 03:50:58.323319 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-28 03:50:58.323326 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 03:50:58.323332 | orchestrator | 2026-03-28 03:50:58.323337 | orchestrator | 2026-03-28 03:50:58.323344 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:50:58.323349 | orchestrator | Saturday 28 March 2026 03:50:58 +0000 (0:00:10.886) 0:04:46.462 ******** 2026-03-28 03:50:58.323356 | orchestrator | =============================================================================== 2026-03-28 03:50:58.323362 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.42s 2026-03-28 03:50:58.323369 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.61s 2026-03-28 03:50:58.323377 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.66s 2026-03-28 03:50:58.323386 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.11s 2026-03-28 03:50:58.323393 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.98s 2026-03-28 03:50:58.323399 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.89s 2026-03-28 03:50:58.323414 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.56s 2026-03-28 03:50:58.323420 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.54s 2026-03-28 03:50:58.323427 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.12s 2026-03-28 03:50:58.323433 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.29s 2026-03-28 03:50:58.323438 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.90s 2026-03-28 03:50:58.323444 
| orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.71s 2026-03-28 03:50:58.323450 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.51s 2026-03-28 03:50:58.323464 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.80s 2026-03-28 03:50:58.323480 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.65s 2026-03-28 03:50:58.693698 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.62s 2026-03-28 03:50:58.693787 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.59s 2026-03-28 03:50:58.693798 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.36s 2026-03-28 03:50:58.693804 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.28s 2026-03-28 03:50:58.693810 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.28s 2026-03-28 03:51:01.183683 | orchestrator | 2026-03-28 03:51:01 | INFO  | Task 8e3f8957-1172-45fa-90cf-b481960b8a41 (ceilometer) was prepared for execution. 2026-03-28 03:51:01.183791 | orchestrator | 2026-03-28 03:51:01 | INFO  | It takes a moment until task 8e3f8957-1172-45fa-90cf-b481960b8a41 (ceilometer) has been started and output is visible here. 
2026-03-28 03:51:25.142759 | orchestrator | 2026-03-28 03:51:25.142895 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:51:25.142920 | orchestrator | 2026-03-28 03:51:25.142940 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:51:25.142959 | orchestrator | Saturday 28 March 2026 03:51:05 +0000 (0:00:00.274) 0:00:00.275 ******** 2026-03-28 03:51:25.142977 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:51:25.142996 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:51:25.143015 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:51:25.143033 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:51:25.143051 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:51:25.143070 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:51:25.143088 | orchestrator | 2026-03-28 03:51:25.143106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:51:25.143124 | orchestrator | Saturday 28 March 2026 03:51:06 +0000 (0:00:00.785) 0:00:01.060 ******** 2026-03-28 03:51:25.143141 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143160 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143210 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143228 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143247 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143264 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-28 03:51:25.143283 | orchestrator | 2026-03-28 03:51:25.143301 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-28 03:51:25.143320 | orchestrator | 2026-03-28 03:51:25.143339 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-28 03:51:25.143358 | orchestrator | Saturday 28 March 2026 03:51:06 +0000 (0:00:00.656) 0:00:01.716 ******** 2026-03-28 03:51:25.143378 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:51:25.143399 | orchestrator | 2026-03-28 03:51:25.143424 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-28 03:51:25.143443 | orchestrator | Saturday 28 March 2026 03:51:08 +0000 (0:00:01.301) 0:00:03.018 ******** 2026-03-28 03:51:25.143463 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:25.143482 | orchestrator | 2026-03-28 03:51:25.143502 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-28 03:51:25.143522 | orchestrator | Saturday 28 March 2026 03:51:08 +0000 (0:00:00.123) 0:00:03.142 ******** 2026-03-28 03:51:25.143541 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:25.143560 | orchestrator | 2026-03-28 03:51:25.143579 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-28 03:51:25.143598 | orchestrator | Saturday 28 March 2026 03:51:08 +0000 (0:00:00.168) 0:00:03.310 ******** 2026-03-28 03:51:25.143648 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:51:25.143667 | orchestrator | 2026-03-28 03:51:25.143685 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-28 03:51:25.143704 | orchestrator | Saturday 28 March 2026 03:51:12 +0000 (0:00:03.763) 0:00:07.074 ******** 2026-03-28 03:51:25.143722 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:51:25.143739 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-28 03:51:25.143757 | orchestrator | 
2026-03-28 03:51:25.143776 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-28 03:51:25.143794 | orchestrator | Saturday 28 March 2026 03:51:16 +0000 (0:00:04.021) 0:00:11.095 ******** 2026-03-28 03:51:25.143813 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:51:25.143831 | orchestrator | 2026-03-28 03:51:25.143848 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-28 03:51:25.143885 | orchestrator | Saturday 28 March 2026 03:51:19 +0000 (0:00:03.240) 0:00:14.336 ******** 2026-03-28 03:51:25.143903 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-28 03:51:25.143919 | orchestrator | 2026-03-28 03:51:25.143938 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-28 03:51:25.143957 | orchestrator | Saturday 28 March 2026 03:51:23 +0000 (0:00:03.943) 0:00:18.279 ******** 2026-03-28 03:51:25.143975 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:25.143993 | orchestrator | 2026-03-28 03:51:25.144010 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-28 03:51:25.144028 | orchestrator | Saturday 28 March 2026 03:51:23 +0000 (0:00:00.139) 0:00:18.419 ******** 2026-03-28 03:51:25.144051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:25.144100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:25.144120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:25.144139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:25.144217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:25.144241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:25.144261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:25.144291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:30.098077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:30.098251 | orchestrator | 2026-03-28 03:51:30.098275 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-28 03:51:30.098308 | orchestrator | Saturday 28 March 2026 03:51:25 +0000 (0:00:01.509) 0:00:19.929 ******** 2026-03-28 03:51:30.098318 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-03-28 03:51:30.098328 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:51:30.098337 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:51:30.098346 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:51:30.098355 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:51:30.098363 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:51:30.098372 | orchestrator | 2026-03-28 03:51:30.098381 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-28 03:51:30.098390 | orchestrator | Saturday 28 March 2026 03:51:26 +0000 (0:00:01.660) 0:00:21.589 ******** 2026-03-28 03:51:30.098399 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:51:30.098409 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:51:30.098417 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:51:30.098426 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:51:30.098434 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:51:30.098443 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:51:30.098451 | orchestrator | 2026-03-28 03:51:30.098460 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-28 03:51:30.098469 | orchestrator | Saturday 28 March 2026 03:51:27 +0000 (0:00:00.663) 0:00:22.253 ******** 2026-03-28 03:51:30.098478 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:30.098486 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:30.098498 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:30.098513 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:30.098538 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:30.098553 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:30.098568 | orchestrator | 2026-03-28 03:51:30.098584 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-03-28 03:51:30.098600 | orchestrator | Saturday 28 March 2026 03:51:28 +0000 (0:00:00.856) 0:00:23.110 ******** 2026-03-28 03:51:30.098616 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:51:30.098627 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:51:30.098638 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:51:30.098648 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:51:30.098659 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:51:30.098727 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:51:30.098745 | orchestrator | 2026-03-28 03:51:30.098761 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-28 03:51:30.098775 | orchestrator | Saturday 28 March 2026 03:51:28 +0000 (0:00:00.629) 0:00:23.739 ******** 2026-03-28 03:51:30.098799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:30.098819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:30.098852 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:30.098884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:30.098896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:30.098908 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:30.098919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:30.098929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:30.098943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:30.098953 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:30.098963 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:30.098972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:30.098988 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:30.099004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.247720 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:35.247860 | orchestrator | 2026-03-28 03:51:35.247882 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-28 03:51:35.247897 | orchestrator | Saturday 28 March 2026 03:51:30 +0000 (0:00:01.150) 0:00:24.889 ******** 2026-03-28 03:51:35.247912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.247926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:35.247939 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:35.247970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.247983 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:35.248016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:35.248029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.248041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-03-28 03:51:35.248053 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:35.248086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.248099 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:35.248110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.248121 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:35.248139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:35.248152 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:35.248163 | orchestrator | 2026-03-28 03:51:35.248202 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-28 03:51:35.248227 | orchestrator | Saturday 28 March 2026 03:51:31 +0000 (0:00:00.960) 0:00:25.849 ******** 2026-03-28 03:51:35.248239 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:51:35.248249 | orchestrator | 2026-03-28 03:51:35.248259 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-28 03:51:35.248271 | orchestrator | Saturday 28 March 2026 03:51:31 +0000 (0:00:00.769) 0:00:26.619 ******** 2026-03-28 03:51:35.248283 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:51:35.248296 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:51:35.248307 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:51:35.248318 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:51:35.248329 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:51:35.248340 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:51:35.248351 | orchestrator | 2026-03-28 03:51:35.248363 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-28 03:51:35.248375 | orchestrator | Saturday 28 March 2026 03:51:32 +0000 (0:00:00.851) 
0:00:27.471 ******** 2026-03-28 03:51:35.248386 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:51:35.248396 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:51:35.248406 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:51:35.248417 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:51:35.248429 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:51:35.248440 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:51:35.248450 | orchestrator | 2026-03-28 03:51:35.248460 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-28 03:51:35.248471 | orchestrator | Saturday 28 March 2026 03:51:33 +0000 (0:00:01.002) 0:00:28.473 ******** 2026-03-28 03:51:35.248482 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:35.248493 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:35.248504 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:35.248515 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:35.248526 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:35.248537 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:35.248548 | orchestrator | 2026-03-28 03:51:35.248558 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-28 03:51:35.248569 | orchestrator | Saturday 28 March 2026 03:51:34 +0000 (0:00:00.891) 0:00:29.365 ******** 2026-03-28 03:51:35.248580 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:35.248590 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:35.248602 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:35.248613 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:35.248624 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:35.248634 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:35.248645 | orchestrator | 2026-03-28 03:51:40.772571 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-28 03:51:40.772673 | orchestrator | Saturday 28 March 2026 03:51:35 +0000 (0:00:00.676) 0:00:30.041 ******** 2026-03-28 03:51:40.772695 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:51:40.772708 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 03:51:40.772719 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:51:40.772731 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:51:40.772743 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:51:40.772755 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:51:40.772767 | orchestrator | 2026-03-28 03:51:40.772779 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-28 03:51:40.772792 | orchestrator | Saturday 28 March 2026 03:51:37 +0000 (0:00:01.850) 0:00:31.891 ******** 2026-03-28 03:51:40.772807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.772854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:40.772869 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:40.772898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.772907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:40.772915 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:40.772922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.772948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:40.772956 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:40.772965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.773005 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 03:51:40.773013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.773020 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:40.773032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:40.773040 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:40.773048 | orchestrator | 2026-03-28 03:51:40.773056 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-03-28 03:51:40.773063 | orchestrator | Saturday 28 March 2026 03:51:37 +0000 (0:00:00.853) 0:00:32.744 ******** 2026-03-28 03:51:40.773071 | orchestrator | 
skipping: [testbed-node-0] 2026-03-28 03:51:40.773078 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:40.773085 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:40.773092 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:40.773099 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:40.773107 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:40.773116 | orchestrator | 2026-03-28 03:51:40.773124 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-03-28 03:51:40.773132 | orchestrator | Saturday 28 March 2026 03:51:38 +0000 (0:00:00.960) 0:00:33.705 ******** 2026-03-28 03:51:40.773141 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:51:40.773150 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 03:51:40.773158 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:51:40.773166 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:51:40.773207 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:51:40.773215 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:51:40.773223 | orchestrator | 2026-03-28 03:51:40.773232 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-03-28 03:51:40.773240 | orchestrator | Saturday 28 March 2026 03:51:40 +0000 (0:00:01.385) 0:00:35.090 ******** 2026-03-28 03:51:40.773256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:46.823387 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:46.823407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:46.823453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:46.823477 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:46.823489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823524 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:46.823536 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:46.823567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823579 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:46.823590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:46.823602 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:46.823613 | orchestrator | 2026-03-28 03:51:46.823626 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-03-28 03:51:46.823639 | orchestrator | Saturday 28 March 2026 03:51:41 +0000 (0:00:01.175) 0:00:36.266 ******** 2026-03-28 03:51:46.823651 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:46.823664 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:46.823674 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:46.823685 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:46.823696 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:46.823716 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:46.823728 | orchestrator | 2026-03-28 03:51:46.823740 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-03-28 03:51:46.823753 | orchestrator | Saturday 28 March 2026 03:51:42 +0000 (0:00:00.766) 0:00:37.033 ******** 2026-03-28 03:51:46.823765 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:46.823778 | orchestrator | 2026-03-28 03:51:46.823792 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-03-28 03:51:46.823805 | orchestrator | Saturday 28 March 2026 03:51:42 +0000 (0:00:00.151) 0:00:37.185 ******** 2026-03-28 03:51:46.823817 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:46.823832 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:46.823845 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:46.823858 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:46.823871 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:46.823883 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:46.823894 | orchestrator | 2026-03-28 
03:51:46.823906 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-28 03:51:46.823918 | orchestrator | Saturday 28 March 2026 03:51:42 +0000 (0:00:00.613) 0:00:37.799 ******** 2026-03-28 03:51:46.823943 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:51:46.823957 | orchestrator | 2026-03-28 03:51:46.823969 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-03-28 03:51:46.823981 | orchestrator | Saturday 28 March 2026 03:51:44 +0000 (0:00:01.461) 0:00:39.260 ******** 2026-03-28 03:51:46.823994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:46.824018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:47.706235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:47.706349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:47.706390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:47.706414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:47.706465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:47.706487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:47.706527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:47.706541 | orchestrator | 2026-03-28 03:51:47.706555 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-03-28 03:51:47.706569 | orchestrator | Saturday 28 March 2026 03:51:46 +0000 (0:00:02.353) 0:00:41.614 ******** 2026-03-28 03:51:47.706581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:47.706612 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:47.706635 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:47.706648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:47.706661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
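Editor's note: the `volumes` lists in these item dumps contain empty-string entries (`'', ''`). These are placeholders left by optional mounts whose Jinja2 conditions rendered empty; they are filtered out before the container is created, so only the non-empty entries become actual bind mounts. A short sketch of that filtering, using one of the volume lists from the log (assumption: the filter is a plain truthiness check, as the empty strings never appear as real mounts):

```python
# Volume list as printed for ceilometer_central in this log; the trailing
# empty strings are placeholders from disabled optional mounts.
volumes = [
    "/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro",
    "/etc/localtime:/etc/localtime:ro",
    "/etc/timezone:/etc/timezone:ro",
    "ceilometer:/var/lib/ceilometer/",
    "kolla_logs:/var/log/kolla/",
    "", "",
]

# Drop empty placeholders; only these entries become real bind mounts.
effective = [v for v in volumes if v]
```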
2026-03-28 03:51:47.706672 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:47.706691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:47.706716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:49.368520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:49.368618 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:49.368640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368676 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:49.368683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368690 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:49.368698 | 
orchestrator | 2026-03-28 03:51:49.368706 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-03-28 03:51:49.368714 | orchestrator | Saturday 28 March 2026 03:51:47 +0000 (0:00:00.889) 0:00:42.503 ******** 2026-03-28 03:51:49.368722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:49.368753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:49.368777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:51:49.368791 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:51:49.368798 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:51:49.368805 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:51:49.368812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368819 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:51:49.368826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:49.368833 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:51:49.368846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:51:57.169505 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:51:57.169586 | orchestrator | 2026-03-28 03:51:57.169617 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-03-28 03:51:57.169625 | orchestrator | Saturday 28 March 2026 03:51:49 +0000 (0:00:01.654) 0:00:44.157 ******** 2026-03-28 03:51:57.169634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169644 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:57.169705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:57.169712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:51:57.169719 | orchestrator | 2026-03-28 03:51:57.169725 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-28 03:51:57.169732 | orchestrator | Saturday 28 March 2026 03:51:51 +0000 (0:00:02.534) 0:00:46.692 ******** 2026-03-28 03:51:57.169738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:51:57.169755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.127581 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.127708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.127729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.127745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:07.127761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:07.127776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:07.127820 | orchestrator | 2026-03-28 03:52:07.127837 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-28 03:52:07.127851 | orchestrator | Saturday 28 March 2026 03:51:57 +0000 (0:00:05.271) 0:00:51.963 ******** 2026-03-28 03:52:07.127882 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:52:07.127898 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 03:52:07.127913 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:52:07.127926 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:52:07.127941 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:52:07.127955 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:52:07.127969 | orchestrator | 2026-03-28 03:52:07.127983 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-28 03:52:07.127997 | orchestrator | Saturday 28 March 2026 03:51:58 +0000 (0:00:01.566) 0:00:53.530 ******** 2026-03-28 03:52:07.128010 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:52:07.128024 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:52:07.128038 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:52:07.128052 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:07.128066 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:52:07.128080 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:07.128094 | orchestrator | 2026-03-28 03:52:07.128110 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-28 03:52:07.128126 | orchestrator | Saturday 28 March 2026 03:51:59 +0000 (0:00:00.657) 0:00:54.188 ******** 2026-03-28 03:52:07.128144 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:07.128161 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 03:52:07.128199 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:52:07.128213 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:07.128227 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:52:07.128244 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:52:07.128261 | orchestrator | 2026-03-28 03:52:07.128280 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-28 03:52:07.128301 | orchestrator | Saturday 28 March 2026 03:52:01 +0000 (0:00:01.799) 0:00:55.987 ******** 2026-03-28 03:52:07.128323 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:07.128338 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:52:07.128351 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:07.128365 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:52:07.128378 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:52:07.128394 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:52:07.128410 | orchestrator | 2026-03-28 03:52:07.128427 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-28 03:52:07.128444 | orchestrator | Saturday 28 March 2026 03:52:02 +0000 (0:00:01.554) 0:00:57.542 ******** 2026-03-28 03:52:07.128465 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 03:52:07.128479 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 03:52:07.128493 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 03:52:07.128507 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 03:52:07.128522 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 03:52:07.128536 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 03:52:07.128550 | orchestrator | 2026-03-28 03:52:07.128564 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-28 03:52:07.128578 | orchestrator | Saturday 28 March 2026 03:52:04 +0000 
(0:00:01.736) 0:00:59.278 ******** 2026-03-28 03:52:07.128609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.128626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.128641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:07.128667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:08.040388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:08.040497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:08.040537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:08.040546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:08.040570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:08.040578 | orchestrator | 2026-03-28 03:52:08.040587 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-28 03:52:08.040595 | orchestrator | Saturday 28 March 2026 03:52:07 +0000 (0:00:02.639) 0:01:01.918 ******** 2026-03-28 03:52:08.040604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:08.040627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:08.040635 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 03:52:08.040643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:08.040658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:08.040665 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:52:08.040674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:08.040686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:08.040696 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:52:08.040705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:08.040713 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:08.040728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.995616 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:52:11.995725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.995745 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:11.995758 | orchestrator | 2026-03-28 03:52:11.995771 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-28 03:52:11.995783 | orchestrator | Saturday 28 March 2026 03:52:08 +0000 (0:00:00.915) 0:01:02.834 ******** 2026-03-28 03:52:11.995794 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:52:11.995805 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:52:11.995816 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:52:11.995826 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:11.995837 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
03:52:11.995848 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:11.995859 | orchestrator | 2026-03-28 03:52:11.995870 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-28 03:52:11.995881 | orchestrator | Saturday 28 March 2026 03:52:08 +0000 (0:00:00.927) 0:01:03.762 ******** 2026-03-28 03:52:11.995894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.995908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:11.995921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.995932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:11.995990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.996003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 03:52:11.996015 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:52:11.996026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:52:11.996037 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:52:11.996049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.996061 | orchestrator | skipping: [testbed-node-3] 2026-03-28 03:52:11.996072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.996084 | orchestrator | skipping: [testbed-node-4] 2026-03-28 03:52:11.996095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-28 03:52:11.996117 | orchestrator | skipping: [testbed-node-5] 2026-03-28 03:52:11.996129 | orchestrator | 2026-03-28 03:52:11.996140 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-28 03:52:11.996151 | orchestrator | Saturday 28 March 2026 03:52:09 +0000 (0:00:01.013) 0:01:04.775 ******** 2026-03-28 03:52:11.996200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}}) 2026-03-28 03:52:50.575642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:50.575769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:50.575803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:50.575820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:50.575836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:50.575877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:50.575914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-28 03:52:50.575931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-28 03:52:50.575946 | orchestrator | 2026-03-28 03:52:50.575961 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-28 03:52:50.575977 | orchestrator | Saturday 28 March 2026 03:52:11 +0000 (0:00:02.011) 0:01:06.787 ******** 2026-03-28 03:52:50.575990 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:52:50.576004 | orchestrator | 
skipping: [testbed-node-1]
2026-03-28 03:52:50.576018 | orchestrator | skipping: [testbed-node-2]
2026-03-28 03:52:50.576031 | orchestrator | skipping: [testbed-node-3]
2026-03-28 03:52:50.576045 | orchestrator | skipping: [testbed-node-4]
2026-03-28 03:52:50.576058 | orchestrator | skipping: [testbed-node-5]
2026-03-28 03:52:50.576071 | orchestrator |
2026-03-28 03:52:50.576086 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-03-28 03:52:50.576100 | orchestrator | Saturday 28 March 2026 03:52:12 +0000 (0:00:00.637) 0:01:07.425 ********
2026-03-28 03:52:50.576114 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:52:50.576128 | orchestrator |
2026-03-28 03:52:50.576142 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576154 | orchestrator | Saturday 28 March 2026 03:52:17 +0000 (0:00:05.091) 0:01:12.516 ********
2026-03-28 03:52:50.576162 | orchestrator |
2026-03-28 03:52:50.576170 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576210 | orchestrator | Saturday 28 March 2026 03:52:17 +0000 (0:00:00.082) 0:01:12.599 ********
2026-03-28 03:52:50.576219 | orchestrator |
2026-03-28 03:52:50.576229 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576248 | orchestrator | Saturday 28 March 2026 03:52:17 +0000 (0:00:00.075) 0:01:12.675 ********
2026-03-28 03:52:50.576257 | orchestrator |
2026-03-28 03:52:50.576267 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576276 | orchestrator | Saturday 28 March 2026 03:52:18 +0000 (0:00:00.328) 0:01:13.003 ********
2026-03-28 03:52:50.576286 | orchestrator |
2026-03-28 03:52:50.576295 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576304 | orchestrator | Saturday 28 March 2026 03:52:18 +0000 (0:00:00.073) 0:01:13.076 ********
2026-03-28 03:52:50.576313 | orchestrator |
2026-03-28 03:52:50.576322 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-28 03:52:50.576330 | orchestrator | Saturday 28 March 2026 03:52:18 +0000 (0:00:00.076) 0:01:13.153 ********
2026-03-28 03:52:50.576339 | orchestrator |
2026-03-28 03:52:50.576348 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-03-28 03:52:50.576357 | orchestrator | Saturday 28 March 2026 03:52:18 +0000 (0:00:00.074) 0:01:13.228 ********
2026-03-28 03:52:50.576366 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:52:50.576375 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:52:50.576384 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:52:50.576393 | orchestrator |
2026-03-28 03:52:50.576402 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-03-28 03:52:50.576411 | orchestrator | Saturday 28 March 2026 03:52:29 +0000 (0:00:10.635) 0:01:23.864 ********
2026-03-28 03:52:50.576420 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:52:50.576428 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:52:50.576437 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:52:50.576446 | orchestrator |
2026-03-28 03:52:50.576455 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-03-28 03:52:50.576465 | orchestrator | Saturday 28 March 2026 03:52:38 +0000 (0:00:09.639) 0:01:33.503 ********
2026-03-28 03:52:50.576474 | orchestrator | changed: [testbed-node-4]
2026-03-28 03:52:50.576483 | orchestrator | changed: [testbed-node-5]
2026-03-28 03:52:50.576492 | orchestrator | changed: [testbed-node-3]
2026-03-28 03:52:50.576501 | orchestrator |
2026-03-28 03:52:50.576511 | orchestrator | PLAY RECAP
*********************************************************************
2026-03-28 03:52:50.576521 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-28 03:52:50.576532 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 03:52:50.576548 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 03:52:51.158349 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-28 03:52:51.158433 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-28 03:52:51.158441 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-28 03:52:51.158447 | orchestrator |
2026-03-28 03:52:51.158451 | orchestrator |
2026-03-28 03:52:51.158456 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:52:51.158462 | orchestrator | Saturday 28 March 2026 03:52:50 +0000 (0:00:11.853) 0:01:45.357 ********
2026-03-28 03:52:51.158466 | orchestrator | ===============================================================================
2026-03-28 03:52:51.158470 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.85s
2026-03-28 03:52:51.158473 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.64s
2026-03-28 03:52:51.158496 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.64s
2026-03-28 03:52:51.158500 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.27s
2026-03-28 03:52:51.158504 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 5.09s
2026-03-28 03:52:51.158508 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.02s
2026-03-28 03:52:51.158512 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.94s
2026-03-28 03:52:51.158516 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.76s
2026-03-28 03:52:51.158519 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.24s
2026-03-28 03:52:51.158523 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.64s
2026-03-28 03:52:51.158527 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.53s
2026-03-28 03:52:51.158531 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.35s
2026-03-28 03:52:51.158534 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 2.01s
2026-03-28 03:52:51.158538 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.85s
2026-03-28 03:52:51.158542 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.80s
2026-03-28 03:52:51.158547 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.74s
2026-03-28 03:52:51.158550 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.66s
2026-03-28 03:52:51.158554 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.65s
2026-03-28 03:52:51.158558 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.57s
2026-03-28 03:52:51.158562 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.55s
2026-03-28 03:52:53.654920 | orchestrator | 2026-03-28 03:52:53 | INFO  | Task bf535def-e081-489b-b422-16ce56db111e (aodh) was prepared for execution.
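The PLAY RECAP counters above can be checked mechanically, e.g. by a wrapper script that scans the console log and fails when any host reports `failed` or `unreachable` tasks. A minimal sketch — the regex assumes the standard Ansible recap line format seen in this log; function names and the sample data are illustrative, not part of the job itself:

```python
import re

# Matches standard Ansible PLAY RECAP lines, e.g.
# "testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>[\w.-]+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(lines):
    """Return {host: {counter_name: int_value}} for every recap line found."""
    results = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if not m:
            continue
        counters = dict(
            (key, int(val))
            for key, val in (pair.split("=") for pair in m.group("counters").split())
        )
        results[m.group("host")] = counters
    return results

def run_failed(recap):
    """True if any host reported failed or unreachable tasks."""
    return any(c.get("failed", 0) or c.get("unreachable", 0) for c in recap.values())

# Sample lines taken from the recap above
sample = [
    "testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0",
    "testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0",
]
recap = parse_recap(sample)
print(recap["testbed-node-0"]["changed"])  # → 13
print(run_failed(recap))                   # → False
```

In this run every host shows `failed=0 unreachable=0`, so such a check would pass for the ceilometer play.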
2026-03-28 03:52:53.654993 | orchestrator | 2026-03-28 03:52:53 | INFO  | It takes a moment until task bf535def-e081-489b-b422-16ce56db111e (aodh) has been started and output is visible here.
2026-03-28 03:53:26.747900 | orchestrator |
2026-03-28 03:53:26.748011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 03:53:26.748028 | orchestrator |
2026-03-28 03:53:26.748041 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 03:53:26.748052 | orchestrator | Saturday 28 March 2026 03:52:58 +0000 (0:00:00.264) 0:00:00.264 ********
2026-03-28 03:53:26.748063 | orchestrator | ok: [testbed-node-0]
2026-03-28 03:53:26.748074 | orchestrator | ok: [testbed-node-1]
2026-03-28 03:53:26.748084 | orchestrator | ok: [testbed-node-2]
2026-03-28 03:53:26.748094 | orchestrator |
2026-03-28 03:53:26.748105 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 03:53:26.748116 | orchestrator | Saturday 28 March 2026 03:52:58 +0000 (0:00:00.353) 0:00:00.617 ********
2026-03-28 03:53:26.748127 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-03-28 03:53:26.748139 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-03-28 03:53:26.748149 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-03-28 03:53:26.748160 | orchestrator |
2026-03-28 03:53:26.748171 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-03-28 03:53:26.748243 | orchestrator |
2026-03-28 03:53:26.748255 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-28 03:53:26.748265 | orchestrator | Saturday 28 March 2026 03:52:58 +0000 (0:00:00.461) 0:00:01.078 ********
2026-03-28 03:53:26.748276 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 03:53:26.748288 | orchestrator |
2026-03-28 03:53:26.748299 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-03-28 03:53:26.748338 | orchestrator | Saturday 28 March 2026 03:52:59 +0000 (0:00:00.572) 0:00:01.651 ********
2026-03-28 03:53:26.748350 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-03-28 03:53:26.748360 | orchestrator |
2026-03-28 03:53:26.748371 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-03-28 03:53:26.748382 | orchestrator | Saturday 28 March 2026 03:53:02 +0000 (0:00:03.570) 0:00:05.221 ********
2026-03-28 03:53:26.748393 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-03-28 03:53:26.748404 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-03-28 03:53:26.748414 | orchestrator |
2026-03-28 03:53:26.748425 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-03-28 03:53:26.748437 | orchestrator | Saturday 28 March 2026 03:53:09 +0000 (0:00:06.550) 0:00:11.772 ********
2026-03-28 03:53:26.748449 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 03:53:26.748461 | orchestrator |
2026-03-28 03:53:26.748471 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-03-28 03:53:26.748482 | orchestrator | Saturday 28 March 2026 03:53:13 +0000 (0:00:03.497) 0:00:15.269 ********
2026-03-28 03:53:26.748495 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 03:53:26.748507 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-03-28 03:53:26.748518 | orchestrator |
2026-03-28 03:53:26.748528 | orchestrator | TASK [service-ks-register : aodh | Creating roles] *****************************
2026-03-28
03:53:26.748538 | orchestrator | Saturday 28 March 2026 03:53:17 +0000 (0:00:04.188) 0:00:19.457 ******** 2026-03-28 03:53:26.748548 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:53:26.748560 | orchestrator | 2026-03-28 03:53:26.748571 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-28 03:53:26.748581 | orchestrator | Saturday 28 March 2026 03:53:20 +0000 (0:00:03.377) 0:00:22.835 ******** 2026-03-28 03:53:26.748593 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-28 03:53:26.748603 | orchestrator | 2026-03-28 03:53:26.748614 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-28 03:53:26.748625 | orchestrator | Saturday 28 March 2026 03:53:24 +0000 (0:00:03.993) 0:00:26.828 ******** 2026-03-28 03:53:26.748640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:26.748677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:26.748702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:26.748717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:26.748730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:26.748743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:26.748755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:26.748774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:28.073685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:28.073771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 
03:53:28.073779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:28.073783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:28.073789 | orchestrator | 2026-03-28 03:53:28.073794 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-28 03:53:28.073799 | orchestrator | Saturday 28 March 2026 03:53:26 +0000 (0:00:02.141) 0:00:28.970 ******** 2026-03-28 03:53:28.073804 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:53:28.073808 | orchestrator | 2026-03-28 03:53:28.073812 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-28 03:53:28.073816 | orchestrator | Saturday 28 March 2026 03:53:26 +0000 (0:00:00.137) 0:00:29.108 ******** 2026-03-28 03:53:28.073819 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:53:28.073823 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 03:53:28.073827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:53:28.073831 | orchestrator | 2026-03-28 03:53:28.073835 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-28 03:53:28.073838 | orchestrator | Saturday 28 March 2026 03:53:27 +0000 (0:00:00.532) 0:00:29.640 ******** 2026-03-28 03:53:28.073843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:28.073877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:28.073882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:28.073886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:28.073890 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:53:28.073895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:28.073899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:28.073903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:28.073914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-28 03:53:33.199951 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:53:33.200047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:33.200065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:33.200081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:33.200094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:33.200106 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:53:33.200118 | orchestrator | 2026-03-28 03:53:33.200132 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-28 03:53:33.200147 | orchestrator | Saturday 28 March 2026 03:53:28 +0000 (0:00:00.663) 0:00:30.304 ******** 2026-03-28 03:53:33.200241 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:53:33.200257 | orchestrator | 2026-03-28 03:53:33.200270 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-28 03:53:33.200282 | orchestrator | Saturday 28 March 2026 03:53:28 +0000 (0:00:00.818) 0:00:31.122 ******** 2026-03-28 03:53:33.200295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:33.200343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:33.200354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:33.200362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:33.200370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:33.200386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:33.200394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.200408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.861540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.861661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.861687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.861700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:33.861735 | orchestrator | 2026-03-28 03:53:33.861748 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-28 03:53:33.861760 | orchestrator | Saturday 28 March 2026 03:53:33 +0000 (0:00:04.306) 0:00:35.428 ******** 2026-03-28 03:53:33.861772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:33.861784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:33.861812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:33.861823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:33.861834 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:53:33.861845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:33.861862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:33.861873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:33.861883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-03-28 03:53:33.861893 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:53:33.861910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:35.039478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:35.039585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:35.039624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:35.039638 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:53:35.039651 | orchestrator | 2026-03-28 03:53:35.039677 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-28 03:53:35.039690 | orchestrator | Saturday 28 March 2026 03:53:33 +0000 (0:00:00.663) 0:00:36.092 ******** 2026-03-28 03:53:35.039702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:35.039716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:35.039728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:35.039757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-28 03:53:35.039770 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:53:35.039790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:35.039801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:35.039813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:35.039824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:35.039836 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:53:35.039854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-28 03:53:39.628644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 03:53:39.628795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 03:53:39.628819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 03:53:39.628832 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:53:39.628845 | orchestrator | 2026-03-28 03:53:39.628857 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-03-28 03:53:39.628868 | orchestrator | Saturday 28 March 2026 03:53:35 +0000 (0:00:01.174) 0:00:37.266 ******** 2026-03-28 03:53:39.628879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:39.628891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:39.628920 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:39.628941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:39.628952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:39.628962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:39.628972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:39.628983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:39.628993 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:39.629031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305604 | orchestrator | 2026-03-28 03:53:49.305617 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-28 03:53:49.305629 | orchestrator | Saturday 28 March 2026 03:53:39 +0000 (0:00:04.587) 0:00:41.854 ******** 2026-03-28 03:53:49.305641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:49.305654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:49.305665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:49.305715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305759 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:49.305806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647655 | orchestrator | 2026-03-28 03:53:54.647674 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-28 03:53:54.647687 | orchestrator | Saturday 28 March 2026 03:53:49 +0000 (0:00:09.675) 0:00:51.530 ******** 2026-03-28 03:53:54.647698 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:53:54.647710 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:53:54.647721 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:53:54.647732 | orchestrator | 2026-03-28 03:53:54.647743 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-28 03:53:54.647754 | orchestrator | Saturday 28 March 2026 03:53:51 +0000 (0:00:01.890) 0:00:53.420 ******** 2026-03-28 03:53:54.647767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:54.647780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:54.647814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-28 03:53:54.647844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:53:54.647968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:54:57.287481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-28 03:54:57.287594 | orchestrator | 2026-03-28 03:54:57.287609 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-28 03:54:57.287615 | orchestrator | Saturday 28 March 2026 03:53:54 +0000 (0:00:03.447) 0:00:56.868 ******** 2026-03-28 03:54:57.287620 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:54:57.287625 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:54:57.287630 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:54:57.287635 | orchestrator | 2026-03-28 03:54:57.287640 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-03-28 03:54:57.287645 | orchestrator | Saturday 28 March 2026 03:53:54 +0000 (0:00:00.325) 0:00:57.193 ******** 2026-03-28 03:54:57.287649 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287654 | orchestrator | 2026-03-28 03:54:57.287658 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-03-28 03:54:57.287663 | orchestrator | Saturday 28 March 2026 03:53:57 +0000 (0:00:02.212) 0:00:59.405 ******** 2026-03-28 03:54:57.287667 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287688 | orchestrator | 2026-03-28 03:54:57.287692 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-03-28 03:54:57.287697 | orchestrator | Saturday 28 March 2026 03:53:59 +0000 (0:00:02.396) 0:01:01.801 ******** 2026-03-28 03:54:57.287701 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287706 | orchestrator | 2026-03-28 03:54:57.287710 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-28 03:54:57.287714 | orchestrator | Saturday 
28 March 2026 03:54:13 +0000 (0:00:13.754) 0:01:15.556 ******** 2026-03-28 03:54:57.287719 | orchestrator | 2026-03-28 03:54:57.287723 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-28 03:54:57.287728 | orchestrator | Saturday 28 March 2026 03:54:13 +0000 (0:00:00.074) 0:01:15.631 ******** 2026-03-28 03:54:57.287732 | orchestrator | 2026-03-28 03:54:57.287736 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-28 03:54:57.287741 | orchestrator | Saturday 28 March 2026 03:54:13 +0000 (0:00:00.074) 0:01:15.705 ******** 2026-03-28 03:54:57.287745 | orchestrator | 2026-03-28 03:54:57.287749 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-03-28 03:54:57.287754 | orchestrator | Saturday 28 March 2026 03:54:13 +0000 (0:00:00.279) 0:01:15.985 ******** 2026-03-28 03:54:57.287759 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287764 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:54:57.287768 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:54:57.287773 | orchestrator | 2026-03-28 03:54:57.287777 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-03-28 03:54:57.287782 | orchestrator | Saturday 28 March 2026 03:54:24 +0000 (0:00:11.064) 0:01:27.050 ******** 2026-03-28 03:54:57.287786 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:54:57.287791 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287795 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:54:57.287800 | orchestrator | 2026-03-28 03:54:57.287804 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-03-28 03:54:57.287808 | orchestrator | Saturday 28 March 2026 03:54:35 +0000 (0:00:10.620) 0:01:37.670 ******** 2026-03-28 03:54:57.287813 | orchestrator | changed: [testbed-node-1] 2026-03-28 
03:54:57.287817 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287821 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:54:57.287826 | orchestrator | 2026-03-28 03:54:57.287830 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-03-28 03:54:57.287835 | orchestrator | Saturday 28 March 2026 03:54:46 +0000 (0:00:10.676) 0:01:48.348 ******** 2026-03-28 03:54:57.287839 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:54:57.287844 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:54:57.287848 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:54:57.287852 | orchestrator | 2026-03-28 03:54:57.287857 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:54:57.287862 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 03:54:57.287868 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 03:54:57.287873 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 03:54:57.287877 | orchestrator | 2026-03-28 03:54:57.287881 | orchestrator | 2026-03-28 03:54:57.287886 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:54:57.287890 | orchestrator | Saturday 28 March 2026 03:54:56 +0000 (0:00:10.769) 0:01:59.117 ******** 2026-03-28 03:54:57.287894 | orchestrator | =============================================================================== 2026-03-28 03:54:57.287899 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.75s 2026-03-28 03:54:57.287903 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 11.07s 2026-03-28 03:54:57.287924 | orchestrator | aodh : Restart aodh-notifier container 
--------------------------------- 10.77s 2026-03-28 03:54:57.287929 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.68s 2026-03-28 03:54:57.287933 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.62s 2026-03-28 03:54:57.287938 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.68s 2026-03-28 03:54:57.287942 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.55s 2026-03-28 03:54:57.287946 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.59s 2026-03-28 03:54:57.287951 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.31s 2026-03-28 03:54:57.287955 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.19s 2026-03-28 03:54:57.287959 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.99s 2026-03-28 03:54:57.287964 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.57s 2026-03-28 03:54:57.287968 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.50s 2026-03-28 03:54:57.287972 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.45s 2026-03-28 03:54:57.287977 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.38s 2026-03-28 03:54:57.287981 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.40s 2026-03-28 03:54:57.287985 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.21s 2026-03-28 03:54:57.287990 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.14s 2026-03-28 03:54:57.287994 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.89s 2026-03-28 03:54:57.287998 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.17s 2026-03-28 03:54:59.810970 | orchestrator | 2026-03-28 03:54:59 | INFO  | Task 2013e128-0981-4fb8-b80f-c76d6e469850 (kolla-ceph-rgw) was prepared for execution. 2026-03-28 03:54:59.811056 | orchestrator | 2026-03-28 03:54:59 | INFO  | It takes a moment until task 2013e128-0981-4fb8-b80f-c76d6e469850 (kolla-ceph-rgw) has been started and output is visible here. 2026-03-28 03:55:36.594441 | orchestrator | 2026-03-28 03:55:36.594554 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:55:36.594568 | orchestrator | 2026-03-28 03:55:36.594577 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:55:36.594585 | orchestrator | Saturday 28 March 2026 03:55:04 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-03-28 03:55:36.594594 | orchestrator | ok: [testbed-manager] 2026-03-28 03:55:36.594603 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:55:36.594611 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:55:36.594619 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:55:36.594626 | orchestrator | ok: [testbed-node-3] 2026-03-28 03:55:36.594634 | orchestrator | ok: [testbed-node-4] 2026-03-28 03:55:36.594642 | orchestrator | ok: [testbed-node-5] 2026-03-28 03:55:36.594650 | orchestrator | 2026-03-28 03:55:36.594658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:55:36.594666 | orchestrator | Saturday 28 March 2026 03:55:05 +0000 (0:00:00.913) 0:00:01.211 ******** 2026-03-28 03:55:36.594674 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594682 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594691 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594699 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594706 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594714 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594722 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-28 03:55:36.594753 | orchestrator | 2026-03-28 03:55:36.594762 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-28 03:55:36.594769 | orchestrator | 2026-03-28 03:55:36.594777 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-28 03:55:36.594785 | orchestrator | Saturday 28 March 2026 03:55:05 +0000 (0:00:00.805) 0:00:02.016 ******** 2026-03-28 03:55:36.594793 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 03:55:36.594803 | orchestrator | 2026-03-28 03:55:36.594811 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-28 03:55:36.594819 | orchestrator | Saturday 28 March 2026 03:55:07 +0000 (0:00:01.648) 0:00:03.665 ******** 2026-03-28 03:55:36.594827 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-28 03:55:36.594835 | orchestrator | 2026-03-28 03:55:36.594843 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-28 03:55:36.594851 | orchestrator | Saturday 28 March 2026 03:55:11 +0000 (0:00:03.589) 0:00:07.254 ******** 2026-03-28 03:55:36.594860 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-28 03:55:36.594870 | orchestrator | changed: [testbed-manager] => 
(item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-28 03:55:36.594877 | orchestrator | 2026-03-28 03:55:36.594885 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-28 03:55:36.594893 | orchestrator | Saturday 28 March 2026 03:55:17 +0000 (0:00:06.677) 0:00:13.931 ******** 2026-03-28 03:55:36.594901 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-28 03:55:36.594909 | orchestrator | 2026-03-28 03:55:36.594917 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-28 03:55:36.594925 | orchestrator | Saturday 28 March 2026 03:55:21 +0000 (0:00:03.201) 0:00:17.132 ******** 2026-03-28 03:55:36.594932 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:55:36.594941 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-28 03:55:36.594953 | orchestrator | 2026-03-28 03:55:36.594966 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-28 03:55:36.595003 | orchestrator | Saturday 28 March 2026 03:55:24 +0000 (0:00:03.925) 0:00:21.058 ******** 2026-03-28 03:55:36.595018 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-28 03:55:36.595030 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-28 03:55:36.595042 | orchestrator | 2026-03-28 03:55:36.595055 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-28 03:55:36.595069 | orchestrator | Saturday 28 March 2026 03:55:31 +0000 (0:00:06.131) 0:00:27.190 ******** 2026-03-28 03:55:36.595083 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-28 03:55:36.595096 | orchestrator | 2026-03-28 03:55:36.595110 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 
03:55:36.595121 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595139 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595153 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595168 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595204 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595251 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595263 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:36.595272 | orchestrator | 2026-03-28 03:55:36.595286 | orchestrator | 2026-03-28 03:55:36.595299 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:55:36.595313 | orchestrator | Saturday 28 March 2026 03:55:36 +0000 (0:00:04.967) 0:00:32.157 ******** 2026-03-28 03:55:36.595326 | orchestrator | =============================================================================== 2026-03-28 03:55:36.595340 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.68s 2026-03-28 03:55:36.595353 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.13s 2026-03-28 03:55:36.595365 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.97s 2026-03-28 03:55:36.595377 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.93s 2026-03-28 03:55:36.595390 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.59s 2026-03-28 
03:55:36.595403 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.20s 2026-03-28 03:55:36.595418 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.65s 2026-03-28 03:55:36.595431 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2026-03-28 03:55:36.595444 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-03-28 03:55:39.046335 | orchestrator | 2026-03-28 03:55:39 | INFO  | Task 033872e5-4104-437a-94b6-bef03dacee7e (gnocchi) was prepared for execution. 2026-03-28 03:55:39.046433 | orchestrator | 2026-03-28 03:55:39 | INFO  | It takes a moment until task 033872e5-4104-437a-94b6-bef03dacee7e (gnocchi) has been started and output is visible here. 2026-03-28 03:55:44.527365 | orchestrator | 2026-03-28 03:55:44.527520 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:55:44.527546 | orchestrator | 2026-03-28 03:55:44.527564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:55:44.527579 | orchestrator | Saturday 28 March 2026 03:55:43 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-03-28 03:55:44.527595 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:55:44.527610 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:55:44.527624 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:55:44.527634 | orchestrator | 2026-03-28 03:55:44.527643 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:55:44.527652 | orchestrator | Saturday 28 March 2026 03:55:43 +0000 (0:00:00.352) 0:00:00.630 ******** 2026-03-28 03:55:44.527661 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-03-28 03:55:44.527670 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 
2026-03-28 03:55:44.527679 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-03-28 03:55:44.527688 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-03-28 03:55:44.527697 | orchestrator | 2026-03-28 03:55:44.527706 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-03-28 03:55:44.527715 | orchestrator | skipping: no hosts matched 2026-03-28 03:55:44.527725 | orchestrator | 2026-03-28 03:55:44.527733 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 03:55:44.527743 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:44.527753 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:44.527762 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 03:55:44.527812 | orchestrator | 2026-03-28 03:55:44.527832 | orchestrator | 2026-03-28 03:55:44.527847 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 03:55:44.527861 | orchestrator | Saturday 28 March 2026 03:55:44 +0000 (0:00:00.433) 0:00:01.063 ******** 2026-03-28 03:55:44.527874 | orchestrator | =============================================================================== 2026-03-28 03:55:44.527889 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-03-28 03:55:44.527903 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-03-28 03:55:47.047689 | orchestrator | 2026-03-28 03:55:47 | INFO  | Task 80af1a93-c918-4db0-8bc7-29df17e08633 (manila) was prepared for execution. 
2026-03-28 03:55:47.047925 | orchestrator | 2026-03-28 03:55:47 | INFO  | It takes a moment until task 80af1a93-c918-4db0-8bc7-29df17e08633 (manila) has been started and output is visible here. 2026-03-28 03:56:29.002810 | orchestrator | 2026-03-28 03:56:29.002927 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 03:56:29.002943 | orchestrator | 2026-03-28 03:56:29.002954 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 03:56:29.002965 | orchestrator | Saturday 28 March 2026 03:55:51 +0000 (0:00:00.283) 0:00:00.283 ******** 2026-03-28 03:56:29.002974 | orchestrator | ok: [testbed-node-0] 2026-03-28 03:56:29.002984 | orchestrator | ok: [testbed-node-1] 2026-03-28 03:56:29.002993 | orchestrator | ok: [testbed-node-2] 2026-03-28 03:56:29.003002 | orchestrator | 2026-03-28 03:56:29.003011 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 03:56:29.003020 | orchestrator | Saturday 28 March 2026 03:55:51 +0000 (0:00:00.326) 0:00:00.610 ******** 2026-03-28 03:56:29.003029 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-03-28 03:56:29.003038 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-03-28 03:56:29.003047 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-03-28 03:56:29.003056 | orchestrator | 2026-03-28 03:56:29.003064 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-03-28 03:56:29.003073 | orchestrator | 2026-03-28 03:56:29.003082 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-28 03:56:29.003090 | orchestrator | Saturday 28 March 2026 03:55:52 +0000 (0:00:00.492) 0:00:01.102 ******** 2026-03-28 03:56:29.003099 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-28 03:56:29.003108 | orchestrator | 2026-03-28 03:56:29.003117 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-28 03:56:29.003126 | orchestrator | Saturday 28 March 2026 03:55:52 +0000 (0:00:00.578) 0:00:01.680 ******** 2026-03-28 03:56:29.003135 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:56:29.003145 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:56:29.003200 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:56:29.003210 | orchestrator | 2026-03-28 03:56:29.003219 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-03-28 03:56:29.003228 | orchestrator | Saturday 28 March 2026 03:55:53 +0000 (0:00:00.508) 0:00:02.189 ******** 2026-03-28 03:56:29.003237 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-03-28 03:56:29.003246 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-03-28 03:56:29.003255 | orchestrator | 2026-03-28 03:56:29.003264 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-03-28 03:56:29.003273 | orchestrator | Saturday 28 March 2026 03:55:59 +0000 (0:00:06.503) 0:00:08.692 ******** 2026-03-28 03:56:29.003282 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-03-28 03:56:29.003291 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-03-28 03:56:29.003324 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-03-28 03:56:29.003336 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-03-28 03:56:29.003346 | orchestrator | 2026-03-28 03:56:29.003357 | orchestrator | TASK [service-ks-register : manila 
| Creating projects] ************************ 2026-03-28 03:56:29.003367 | orchestrator | Saturday 28 March 2026 03:56:12 +0000 (0:00:12.757) 0:00:21.450 ******** 2026-03-28 03:56:29.003378 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 03:56:29.003388 | orchestrator | 2026-03-28 03:56:29.003398 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-03-28 03:56:29.003409 | orchestrator | Saturday 28 March 2026 03:56:15 +0000 (0:00:03.255) 0:00:24.705 ******** 2026-03-28 03:56:29.003418 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 03:56:29.003429 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-03-28 03:56:29.003439 | orchestrator | 2026-03-28 03:56:29.003449 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-03-28 03:56:29.003459 | orchestrator | Saturday 28 March 2026 03:56:19 +0000 (0:00:03.974) 0:00:28.680 ******** 2026-03-28 03:56:29.003470 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 03:56:29.003480 | orchestrator | 2026-03-28 03:56:29.003491 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-03-28 03:56:29.003501 | orchestrator | Saturday 28 March 2026 03:56:22 +0000 (0:00:03.126) 0:00:31.807 ******** 2026-03-28 03:56:29.003511 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-03-28 03:56:29.003522 | orchestrator | 2026-03-28 03:56:29.003531 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-03-28 03:56:29.003542 | orchestrator | Saturday 28 March 2026 03:56:26 +0000 (0:00:03.796) 0:00:35.603 ******** 2026-03-28 03:56:29.003573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:29.003589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:29.003601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:29.003619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:29.003632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:29.003642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:29.003659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:39.891520 | orchestrator | 2026-03-28 03:56:39.891542 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-28 03:56:39.891563 | orchestrator | Saturday 28 March 2026 03:56:29 +0000 (0:00:02.367) 0:00:37.970 ******** 2026-03-28 03:56:39.891582 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:56:39.891601 | orchestrator | 2026-03-28 03:56:39.891621 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-03-28 03:56:39.891639 | orchestrator | Saturday 28 March 2026 03:56:29 +0000 (0:00:00.617) 0:00:38.588 ******** 2026-03-28 03:56:39.891657 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:56:39.891673 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:56:39.891686 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:56:39.891698 | orchestrator | 2026-03-28 03:56:39.891710 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-03-28 03:56:39.891723 | orchestrator | Saturday 28 March 2026 03:56:30 +0000 (0:00:01.019) 0:00:39.607 ******** 2026-03-28 03:56:39.891738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-28 03:56:39.891789 | orchestrator | 
skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.891816 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-28 03:56:39.891850 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-28 03:56:39.891870 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.891889 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.891907 | orchestrator | 2026-03-28 03:56:39.891926 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-03-28 03:56:39.891945 | orchestrator | Saturday 28 March 2026 03:56:32 +0000 (0:00:01.814) 0:00:41.422 ******** 2026-03-28 03:56:39.891966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-28 03:56:39.891987 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.892006 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-03-28 03:56:39.892020 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.892033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-28 03:56:39.892052 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-28 03:56:39.892071 | orchestrator | 2026-03-28 03:56:39.892089 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-03-28 03:56:39.892107 | orchestrator | Saturday 28 March 2026 03:56:33 +0000 (0:00:01.238) 0:00:42.660 ******** 2026-03-28 03:56:39.892126 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-03-28 03:56:39.892145 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-03-28 03:56:39.892198 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-03-28 03:56:39.892218 | orchestrator | 2026-03-28 03:56:39.892238 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-03-28 03:56:39.892250 | orchestrator | Saturday 28 March 2026 03:56:34 +0000 (0:00:00.705) 0:00:43.366 ******** 2026-03-28 03:56:39.892260 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:56:39.892272 | orchestrator | 2026-03-28 03:56:39.892282 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-03-28 03:56:39.892293 | orchestrator | Saturday 28 March 2026 03:56:34 +0000 (0:00:00.141) 0:00:43.508 ******** 2026-03-28 03:56:39.892304 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:56:39.892315 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 03:56:39.892326 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:56:39.892336 | orchestrator | 2026-03-28 03:56:39.892347 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-28 03:56:39.892361 | orchestrator | Saturday 28 March 2026 03:56:35 +0000 (0:00:00.517) 0:00:44.025 ******** 2026-03-28 03:56:39.892380 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 03:56:39.892403 | orchestrator | 2026-03-28 03:56:39.892430 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-03-28 03:56:39.892448 | orchestrator | Saturday 28 March 2026 03:56:35 +0000 (0:00:00.604) 0:00:44.629 ******** 2026-03-28 03:56:39.892502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:40.803662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:40.803777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:40.803795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:40.803957 | orchestrator | 2026-03-28 03:56:40.803971 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-28 03:56:40.803984 | orchestrator | Saturday 28 March 2026 03:56:39 +0000 (0:00:04.180) 0:00:48.810 ******** 2026-03-28 03:56:40.804003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:41.487418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487534 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:56:41.487545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:41.487583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487624 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:56:41.487633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:41.487642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:41.487673 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:56:41.487681 | orchestrator | 2026-03-28 03:56:41.487690 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-28 03:56:41.487700 | orchestrator | Saturday 28 March 2026 03:56:40 +0000 (0:00:00.955) 0:00:49.766 ******** 2026-03-28 03:56:41.487714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:46.231186 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231299 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 03:56:46.231307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:46.231313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231340 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:56:46.231345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:56:46.231363 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:56:46.231385 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
03:56:46.231390 | orchestrator | 2026-03-28 03:56:46.231395 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-28 03:56:46.231401 | orchestrator | Saturday 28 March 2026 03:56:41 +0000 (0:00:00.876) 0:00:50.642 ******** 2026-03-28 03:56:46.231422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:53.332911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:53.333051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:53.333077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:53.333364 | orchestrator | 2026-03-28 03:56:53.333384 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-28 03:56:53.333401 | orchestrator | Saturday 28 March 2026 03:56:46 +0000 (0:00:04.737) 0:00:55.379 ******** 2026-03-28 03:56:53.333442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:57.814271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:57.814422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:56:57.814444 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:57.814482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:57.814531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:56:57.814559 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:56:57.814600 | orchestrator | 2026-03-28 03:56:57.814615 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-03-28 03:56:57.814637 | orchestrator | Saturday 28 March 2026 03:56:53 +0000 (0:00:06.872) 0:01:02.251 ******** 2026-03-28 03:56:57.814653 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-28 03:56:57.814673 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-28 03:56:57.814686 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-28 03:56:57.814698 | orchestrator | 2026-03-28 03:56:57.814710 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-28 03:56:57.814763 | orchestrator | Saturday 28 March 2026 03:56:57 +0000 (0:00:03.787) 0:01:06.039 ******** 2026-03-28 03:56:57.814803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:57:01.317096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317296 | orchestrator | skipping: [testbed-node-0] 2026-03-28 03:57:01.317310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:57:01.317341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317406 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317418 | orchestrator | skipping: [testbed-node-1] 2026-03-28 03:57:01.317430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-28 03:57:01.317442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 03:57:01.317490 | orchestrator | skipping: [testbed-node-2] 2026-03-28 03:57:01.317501 | orchestrator | 2026-03-28 03:57:01.317515 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-28 03:57:01.317527 | orchestrator | Saturday 28 March 2026 03:56:57 +0000 (0:00:00.686) 0:01:06.726 ******** 2026-03-28 03:57:01.317547 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:57:43.078133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:57:43.078314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-28 03:57:43.078339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078416 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078482 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078540 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-28 03:57:43.078555 | orchestrator | 2026-03-28 03:57:43.078570 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-03-28 03:57:43.078586 | orchestrator | Saturday 28 March 2026 03:57:01 +0000 (0:00:03.521) 0:01:10.247 ******** 2026-03-28 03:57:43.078600 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:57:43.078614 | orchestrator | 2026-03-28 03:57:43.078629 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-03-28 03:57:43.078643 | orchestrator | Saturday 28 March 2026 03:57:03 +0000 (0:00:02.230) 0:01:12.477 ******** 2026-03-28 03:57:43.078656 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:57:43.078668 | orchestrator | 2026-03-28 03:57:43.078681 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-03-28 03:57:43.078694 | orchestrator | Saturday 28 March 2026 03:57:05 +0000 (0:00:02.300) 0:01:14.778 ******** 2026-03-28 03:57:43.078707 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:57:43.078721 | orchestrator | 2026-03-28 03:57:43.078734 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-28 03:57:43.078747 | orchestrator | Saturday 28 March 2026 03:57:42 +0000 (0:00:36.870) 0:01:51.649 ******** 2026-03-28 03:57:43.078760 | 
orchestrator | 2026-03-28 03:57:43.078783 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-28 03:58:32.916794 | orchestrator | Saturday 28 March 2026 03:57:42 +0000 (0:00:00.074) 0:01:51.723 ******** 2026-03-28 03:58:32.916896 | orchestrator | 2026-03-28 03:58:32.916908 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-28 03:58:32.916918 | orchestrator | Saturday 28 March 2026 03:57:42 +0000 (0:00:00.083) 0:01:51.806 ******** 2026-03-28 03:58:32.916926 | orchestrator | 2026-03-28 03:58:32.916934 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-03-28 03:58:32.916943 | orchestrator | Saturday 28 March 2026 03:57:43 +0000 (0:00:00.078) 0:01:51.885 ******** 2026-03-28 03:58:32.916951 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:58:32.916960 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:58:32.916968 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:58:32.916976 | orchestrator | 2026-03-28 03:58:32.916984 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-03-28 03:58:32.916992 | orchestrator | Saturday 28 March 2026 03:57:58 +0000 (0:00:15.205) 0:02:07.090 ******** 2026-03-28 03:58:32.917000 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:58:32.917008 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:58:32.917016 | orchestrator | changed: [testbed-node-2] 2026-03-28 03:58:32.917024 | orchestrator | 2026-03-28 03:58:32.917032 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-03-28 03:58:32.917064 | orchestrator | Saturday 28 March 2026 03:58:09 +0000 (0:00:11.382) 0:02:18.472 ******** 2026-03-28 03:58:32.917073 | orchestrator | changed: [testbed-node-0] 2026-03-28 03:58:32.917080 | orchestrator | changed: [testbed-node-1] 2026-03-28 03:58:32.917088 | 
orchestrator | changed: [testbed-node-2]
2026-03-28 03:58:32.917096 | orchestrator |
2026-03-28 03:58:32.917104 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-03-28 03:58:32.917112 | orchestrator | Saturday 28 March 2026 03:58:15 +0000 (0:00:05.504) 0:02:23.977 ********
2026-03-28 03:58:32.917120 | orchestrator | changed: [testbed-node-2]
2026-03-28 03:58:32.917128 | orchestrator | changed: [testbed-node-1]
2026-03-28 03:58:32.917136 | orchestrator | changed: [testbed-node-0]
2026-03-28 03:58:32.917144 | orchestrator |
2026-03-28 03:58:32.917212 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 03:58:32.917223 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 03:58:32.917233 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 03:58:32.917241 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-28 03:58:32.917249 | orchestrator |
2026-03-28 03:58:32.917257 | orchestrator |
2026-03-28 03:58:32.917265 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 03:58:32.917273 | orchestrator | Saturday 28 March 2026 03:58:32 +0000 (0:00:17.286) 0:02:41.263 ********
2026-03-28 03:58:32.917282 | orchestrator | ===============================================================================
2026-03-28 03:58:32.917289 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.87s
2026-03-28 03:58:32.917298 | orchestrator | manila : Restart manila-share container -------------------------------- 17.29s
2026-03-28 03:58:32.917305 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.21s
2026-03-28 03:58:32.917313 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.76s
2026-03-28 03:58:32.917321 | orchestrator | manila : Restart manila-data container --------------------------------- 11.38s
2026-03-28 03:58:32.917342 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.87s
2026-03-28 03:58:32.917351 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.50s
2026-03-28 03:58:32.917360 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.50s
2026-03-28 03:58:32.917369 | orchestrator | manila : Copying over config.json files for services -------------------- 4.74s
2026-03-28 03:58:32.917379 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.18s
2026-03-28 03:58:32.917388 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.97s
2026-03-28 03:58:32.917397 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.80s
2026-03-28 03:58:32.917406 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.79s
2026-03-28 03:58:32.917416 | orchestrator | manila : Check manila containers ---------------------------------------- 3.52s
2026-03-28 03:58:32.917425 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.26s
2026-03-28 03:58:32.917435 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.13s
2026-03-28 03:58:32.917444 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.37s
2026-03-28 03:58:32.917453 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.30s
2026-03-28 03:58:32.917462 | orchestrator | manila : Creating Manila database --------------------------------------- 2.23s
2026-03-28 03:58:32.917472 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.81s
2026-03-28 03:58:33.282640 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-03-28 03:58:45.486782 | orchestrator | 2026-03-28 03:58:45 | INFO  | Task 1c720eeb-0178-4786-9116-e98d80232405 (netdata) was prepared for execution.
2026-03-28 03:58:45.486899 | orchestrator | 2026-03-28 03:58:45 | INFO  | It takes a moment until task 1c720eeb-0178-4786-9116-e98d80232405 (netdata) has been started and output is visible here.
2026-03-28 04:00:24.935999 | orchestrator |
2026-03-28 04:00:24.936106 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 04:00:24.936117 | orchestrator |
2026-03-28 04:00:24.936124 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 04:00:24.936131 | orchestrator | Saturday 28 March 2026 03:58:49 +0000 (0:00:00.265) 0:00:00.265 ********
2026-03-28 04:00:24.936139 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-28 04:00:24.936146 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-28 04:00:24.936198 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-28 04:00:24.936206 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-28 04:00:24.936213 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-28 04:00:24.936219 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-28 04:00:24.936225 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-28 04:00:24.936232 | orchestrator |
2026-03-28 04:00:24.936239 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-28 04:00:24.936245 | orchestrator |
2026-03-28 04:00:24.936251 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-28 04:00:24.936260 | orchestrator | Saturday 28 March 2026 03:58:50 +0000 (0:00:00.966) 0:00:01.231 ********
2026-03-28 04:00:24.936273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 04:00:24.936286 | orchestrator |
2026-03-28 04:00:24.936298 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-28 04:00:24.936309 | orchestrator | Saturday 28 March 2026 03:58:52 +0000 (0:00:01.382) 0:00:02.614 ********
2026-03-28 04:00:24.936319 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:24.936329 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:24.936336 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:24.936342 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:24.936349 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:24.936355 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:24.936361 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:24.936367 | orchestrator |
2026-03-28 04:00:24.936374 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-28 04:00:24.936380 | orchestrator | Saturday 28 March 2026 03:58:54 +0000 (0:00:01.896) 0:00:04.511 ********
2026-03-28 04:00:24.936387 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:24.936398 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:24.936408 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:24.936418 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:24.936428 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:24.936438 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:24.936447 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:24.936457 | orchestrator |
2026-03-28 04:00:24.936467 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-28 04:00:24.936502 | orchestrator | Saturday 28 March 2026 03:58:56 +0000 (0:00:02.438) 0:00:06.950 ********
2026-03-28 04:00:24.936525 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:00:24.936537 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.936548 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:00:24.936559 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:00:24.936569 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:00:24.936608 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:00:24.936620 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:00:24.936631 | orchestrator |
2026-03-28 04:00:24.936643 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-28 04:00:24.936668 | orchestrator | Saturday 28 March 2026 03:58:58 +0000 (0:00:01.573) 0:00:08.523 ********
2026-03-28 04:00:24.936676 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.936684 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:00:24.936691 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:00:24.936698 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:00:24.936706 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:00:24.936713 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:00:24.936721 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:00:24.936728 | orchestrator |
2026-03-28 04:00:24.936735 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-28 04:00:24.936743 | orchestrator | Saturday 28 March 2026 03:59:13 +0000 (0:00:15.295) 0:00:23.819 ********
2026-03-28 04:00:24.936750 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:00:24.936758 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:00:24.936765 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.936772 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:00:24.936779 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:00:24.936786 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:00:24.936793 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:00:24.936800 | orchestrator |
2026-03-28 04:00:24.936807 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-28 04:00:24.936815 | orchestrator | Saturday 28 March 2026 03:59:56 +0000 (0:00:42.571) 0:01:06.390 ********
2026-03-28 04:00:24.936824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 04:00:24.936832 | orchestrator |
2026-03-28 04:00:24.936840 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-28 04:00:24.936847 | orchestrator | Saturday 28 March 2026 03:59:57 +0000 (0:00:01.774) 0:01:08.165 ********
2026-03-28 04:00:24.936855 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-28 04:00:24.936862 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-28 04:00:24.936868 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-28 04:00:24.936885 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-28 04:00:24.936913 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-28 04:00:24.936920 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-28 04:00:24.936927 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-28 04:00:24.936933 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-28 04:00:24.936939 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-28 04:00:24.936945 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-28 04:00:24.936951 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-28 04:00:24.936957 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-28 04:00:24.936964 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-28 04:00:24.936970 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-28 04:00:24.936976 | orchestrator |
2026-03-28 04:00:24.936982 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-28 04:00:24.936989 | orchestrator | Saturday 28 March 2026 04:00:01 +0000 (0:00:01.458) 0:01:12.096 ********
2026-03-28 04:00:24.936995 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:24.937002 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:24.937008 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:24.937014 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:24.937027 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:24.937033 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:24.937039 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:24.937045 | orchestrator |
2026-03-28 04:00:24.937051 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-28 04:00:24.937058 | orchestrator | Saturday 28 March 2026 04:00:03 +0000 (0:00:01.336) 0:01:13.555 ********
2026-03-28 04:00:24.937064 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.937070 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:00:24.937076 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:00:24.937082 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:00:24.937088 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:00:24.937094 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:00:24.937100 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:00:24.937106 | orchestrator |
2026-03-28 04:00:24.937113 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-28 04:00:24.937119 | orchestrator | Saturday 28 March 2026 04:00:04 +0000 (0:00:01.288) 0:01:14.892 ********
2026-03-28 04:00:24.937125 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:24.937131 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:24.937137 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:24.937143 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:24.937149 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:24.937173 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:24.937180 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:24.937186 | orchestrator |
2026-03-28 04:00:24.937192 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-28 04:00:24.937198 | orchestrator | Saturday 28 March 2026 04:00:05 +0000 (0:00:01.288) 0:01:16.180 ********
2026-03-28 04:00:24.937205 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:24.937211 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:24.937217 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:24.937223 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:24.937229 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:24.937235 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:24.937241 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:24.937247 | orchestrator |
2026-03-28 04:00:24.937253 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-28 04:00:24.937259 | orchestrator | Saturday 28 March 2026 04:00:08 +0000 (0:00:02.149) 0:01:18.329 ********
2026-03-28 04:00:24.937266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-28 04:00:24.937279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 04:00:24.937286 | orchestrator |
2026-03-28 04:00:24.937292 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-28 04:00:24.937298 | orchestrator | Saturday 28 March 2026 04:00:09 +0000 (0:00:01.472) 0:01:19.801 ********
2026-03-28 04:00:24.937304 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.937310 | orchestrator |
2026-03-28 04:00:24.937316 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-28 04:00:24.937322 | orchestrator | Saturday 28 March 2026 04:00:13 +0000 (0:00:04.238) 0:01:24.040 ********
2026-03-28 04:00:24.937328 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:00:24.937335 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:00:24.937341 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:00:24.937347 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:00:24.937353 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:00:24.937359 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:00:24.937365 | orchestrator | changed: [testbed-manager]
2026-03-28 04:00:24.937371 | orchestrator |
2026-03-28 04:00:24.937377 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:00:24.937390 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:24.937398 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:24.937404 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:24.937410 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:24.937421 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:25.404346 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:25.404452 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 04:00:25.404472 | orchestrator |
2026-03-28 04:00:25.404482 | orchestrator |
2026-03-28 04:00:25.404494 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:00:25.404505 | orchestrator | Saturday 28 March 2026 04:00:24 +0000 (0:00:11.147) 0:01:35.187 ********
2026-03-28 04:00:25.404514 | orchestrator | ===============================================================================
2026-03-28 04:00:25.404523 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.57s
2026-03-28 04:00:25.404532 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.30s
2026-03-28 04:00:25.404540 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.15s
2026-03-28 04:00:25.404549 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 4.24s
2026-03-28 04:00:25.404557 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.93s
2026-03-28 04:00:25.404566 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.44s
2026-03-28 04:00:25.404574 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.15s
2026-03-28 04:00:25.404583 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.90s
2026-03-28 04:00:25.404592 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.77s
2026-03-28 04:00:25.404600 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.57s
2026-03-28 04:00:25.404609 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.47s
2026-03-28 04:00:25.404617 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.46s
2026-03-28 04:00:25.404626 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.38s
2026-03-28 04:00:25.404635 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.34s
2026-03-28 04:00:25.404644 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.29s
2026-03-28 04:00:25.404652 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2026-03-28 04:00:27.916522 | orchestrator | 2026-03-28 04:00:27 | INFO  | Task 5cb5672d-09c8-4673-be33-3ad4bd3e73c6 (prometheus) was prepared for execution.
2026-03-28 04:00:27.916629 | orchestrator | 2026-03-28 04:00:27 | INFO  | It takes a moment until task 5cb5672d-09c8-4673-be33-3ad4bd3e73c6 (prometheus) has been started and output is visible here.
2026-03-28 04:00:37.889560 | orchestrator |
2026-03-28 04:00:37.889694 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 04:00:37.889718 | orchestrator |
2026-03-28 04:00:37.889735 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 04:00:37.889752 | orchestrator | Saturday 28 March 2026 04:00:32 +0000 (0:00:00.288) 0:00:00.288 ********
2026-03-28 04:00:37.889797 | orchestrator | ok: [testbed-manager]
2026-03-28 04:00:37.889816 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:00:37.889832 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:00:37.889864 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:00:37.889882 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:00:37.889899 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:00:37.889915 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:00:37.889932 | orchestrator |
2026-03-28 04:00:37.889948 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 04:00:37.889964 | orchestrator | Saturday 28 March 2026 04:00:33 +0000 (0:00:00.947) 0:00:01.235 ********
2026-03-28 04:00:37.889980 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-28 04:00:37.889998 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890083 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890102 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890119 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890135 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890188 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-28 04:00:37.890208 | orchestrator |
2026-03-28 04:00:37.890224 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-28 04:00:37.890241 | orchestrator |
2026-03-28 04:00:37.890257 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-28 04:00:37.890273 | orchestrator | Saturday 28 March 2026 04:00:34 +0000 (0:00:01.025) 0:00:02.261 ********
2026-03-28 04:00:37.890290 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 04:00:37.890308 | orchestrator |
2026-03-28 04:00:37.890324 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-28 04:00:37.890340 | orchestrator | Saturday 28 March 2026 04:00:35 +0000 (0:00:01.538) 0:00:03.799 ********
2026-03-28 04:00:37.890361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:37.890384 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 04:00:37.890402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:37.890434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:37.890483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:37.890504 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:37.890520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:37.890537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:37.890553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:37.890571 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:37.890590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:37.890626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:38.785419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:38.785542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785561 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:38.785577 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 04:00:38.785591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:38.785681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785741 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:38.785761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:38.785791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:38.785811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:43.897319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:43.897430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:43.897448 | orchestrator | 2026-03-28 04:00:43.897479 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-28 04:00:43.897493 | orchestrator | Saturday 28 March 2026 04:00:38 +0000 (0:00:02.879) 0:00:06.679 ******** 2026-03-28 04:00:43.897516 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 04:00:43.897530 | orchestrator | 2026-03-28 04:00:43.897541 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-28 04:00:43.897552 | orchestrator | Saturday 28 March 2026 04:00:40 +0000 (0:00:01.785) 0:00:08.465 ******** 2026-03-28 04:00:43.897566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 04:00:43.897605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897690 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:00:43.897733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:43.897748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:43.897767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:43.897785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:43.897813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.213874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.213962 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.213994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:46.214002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:46.214061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:46.214081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 04:00:46.214125 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:46.214145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:46.214151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:46.214184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:47.883112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-28 04:00:47.883282 | orchestrator | 2026-03-28 04:00:47.883296 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-28 04:00:47.883303 | orchestrator | Saturday 28 March 2026 04:00:46 +0000 (0:00:05.641) 0:00:14.106 ******** 2026-03-28 04:00:47.883311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-28 04:00:47.883318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:00:47.883325 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:00:47.883387 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-28 04:00:47.883418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:00:47.883429 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:00:47.883440 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:00:47.883457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:00:47.883463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:00:47.883469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:00:47.883475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:00:47.883480 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:00:47.883486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:00:47.883496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:00:47.883508 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:48.121700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:48.121804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:48.121811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:48.121818 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:00:48.121825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:48.121848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:48.121878 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:00:48.121898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:48.121905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121918 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:00:48.121924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:48.121930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:48.121942 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:00:48.121953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:48.121968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:49.333772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:49.333892 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:00:49.333909 | orchestrator |
2026-03-28 04:00:49.333920 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-03-28 04:00:49.333931 | orchestrator | Saturday 28 March 2026 04:00:48 +0000 (0:00:01.914) 0:00:16.021 ********
2026-03-28 04:00:49.333941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:49.333951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:49.333961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:49.333972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:49.334000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:49.334098 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 04:00:49.334111 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:49.334121 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:49.334133 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 04:00:49.334145 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:49.334225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:49.334244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:49.334262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:50.643463 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:00:50.643567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:50.643593 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:00:50.643601 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:00:50.643610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:50.643619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:50.643628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:50.643671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:50.643689 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:00:50.643736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:50.643746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643763 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:00:50.643771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:50.643780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:50.643807 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:00:50.643816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:50.643832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:54.373119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:54.373331 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:00:54.373351 | orchestrator |
2026-03-28 04:00:54.373366 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-03-28 04:00:54.373393 | orchestrator | Saturday 28 March 2026 04:00:50 +0000 (0:00:02.502) 0:00:18.524 ********
2026-03-28 04:00:54.373407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373431 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-28 04:00:54.373475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373567 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:00:54.373579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:54.373600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:54.373612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:54.373630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:54.373644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:54.373666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:57.236768 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-28 04:00:57.236855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:57.236882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:57.236892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:57.236902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:57.236924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-28 04:00:57.236933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:00:57.236961 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-28 04:00:57.236974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:57.236992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:57.237001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:00:57.237011 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:57.237016 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:57.237022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:00:57.237035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:01:01.593720 | orchestrator | 2026-03-28 04:01:01.593794 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-28 04:01:01.593802 | orchestrator | Saturday 28 March 2026 04:00:57 +0000 (0:00:06.603) 0:00:25.127 ******** 2026-03-28 04:01:01.593807 | orchestrator | ok: [testbed-manager -> localhost] 
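The loop items above all share the same kolla-ansible container spec shape: a `container_name`, an `image` reference, a list of `volumes` in `host:container[:mode]` form, and a `dimensions` dict. As a rough illustration of how such a spec maps onto a container invocation (a sketch only — `build_docker_args` is a hypothetical helper, not part of kolla-ansible, and the real deployment goes through the kolla_container module rather than the docker CLI):

```python
def build_docker_args(spec):
    """Translate a kolla-style container spec dict into docker CLI arguments.

    Illustrative sketch: mirrors the key names seen in the task output
    (container_name, image, volumes), not kolla-ansible's actual code path.
    """
    args = ["docker", "run", "--detach", "--name", spec["container_name"]]
    for volume in spec.get("volumes", []):
        # Each entry is already in docker's host:container[:mode] syntax.
        args += ["--volume", volume]
    args.append(spec["image"])
    return args


# Example spec, abbreviated from the prometheus-cadvisor item above.
spec = {
    "container_name": "prometheus_cadvisor",
    "image": "registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130",
    "volumes": ["/sys:/sys:ro", "/var/lib/docker/:/var/lib/docker:ro"],
}
print(build_docker_args(spec))
```

The per-host `changed` lines in the log correspond to one such spec being applied per service per node.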
2026-03-28 04:01:01.593812 | orchestrator | 2026-03-28 04:01:01.593817 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-28 04:01:01.593845 | orchestrator | Saturday 28 March 2026 04:00:58 +0000 (0:00:01.096) 0:00:26.223 ******** 2026-03-28 04:01:01.593852 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593859 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593863 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593878 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 04:01:01.593883 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593888 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-28 04:01:01.593904 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593913 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593917 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593922 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593929 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100502, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593934 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593938 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:01.593946 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588811 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588898 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588908 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588927 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588934 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-28 04:01:03.588940 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100528, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3521235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 04:01:03.588962 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588981 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588987 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.588993 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.589002 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.589008 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 
'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.589014 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.589025 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:03.589035 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883506 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883544 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 
04:01:05.883557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883569 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100496, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3457913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-28 04:01:05.883603 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883616 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883646 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:05.883677 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:05.883689 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:05.883700 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:05.883719 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:05.883731 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:05.883750 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.554962 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555083 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555098 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555125 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555135 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100520, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.350146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555143 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555152 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555219 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555233 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555241 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555256 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555264 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555272 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555281 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:07.555296 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364519 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364610 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364645 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364655 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364664 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364673 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100492, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364714 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364730 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364739 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364748 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364757 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364766 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364776 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:09.364794 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.076183 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077230 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077288 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077303 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077316 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077327 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077369 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100505, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3463018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077443 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077456 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077468 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077479 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077491 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:11.077536 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845558 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845651 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845663 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845672 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845679 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845708 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845729 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845754 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845763 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845771 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100517, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3494625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:12.845786 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr':
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:12.845800 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:01:12.845809 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:12.845821 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:12.845833 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120768 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120879 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:01:19.120898 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120911 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120949 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120962 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.120989 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.121001 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.121033 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.121046 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:01:19.121058 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-28 04:01:19.121069 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:01:19.121081 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.121101 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-28 04:01:19.121113 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:01:19.121125 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:01:19.121141 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100507, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3474898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:19.121184 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100500, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.346182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:19.121214 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100526, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.351302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494147 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100487, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.343178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494302 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100753, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100524, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3508978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494337 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100494, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3443017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100490, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3442922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494360 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100514, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494366 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100510, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3489857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494384 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100750, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3971443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-28 04:01:47.494391 | orchestrator |
2026-03-28 04:01:47.494400 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-28 04:01:47.494412 | orchestrator | Saturday 28 March 2026 04:01:26 +0000 (0:00:28.626) 0:00:54.850 ********
2026-03-28 04:01:47.494422 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 04:01:47.494432 | orchestrator |
2026-03-28 04:01:47.494441 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-28 04:01:47.494457 | orchestrator | Saturday 28 March 2026 04:01:27 +0000 (0:00:00.784) 0:00:55.634 ********
2026-03-28 04:01:47.494466 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494485 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494503 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494512 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494521 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494531 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494550 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494560 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494579 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494589 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494599 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494605 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494616 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494627 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494633 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494644 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494649 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494657 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494666 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494683 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494705 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494714 | orchestrator | [WARNING]: Skipped
2026-03-28 04:01:47.494723 | orchestrator |
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494732 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-28 04:01:47.494740 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-28 04:01:47.494749 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-28 04:01:47.494757 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 04:01:47.494766 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:01:47.494774 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 04:01:47.494782 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 04:01:47.494791 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-28 04:01:47.494799 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-28 04:01:47.494808 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-28 04:01:47.494816 | orchestrator |
2026-03-28 04:01:47.494825 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-28 04:01:47.494883 | orchestrator | Saturday 28 March 2026 04:01:29 +0000 (0:00:01.967) 0:00:57.602 ********
2026-03-28 04:01:47.494893 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:01:47.494903 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:01:47.494911 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:01:47.494920 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:01:47.494928 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:01:47.494937 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:01:47.494952 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:02:04.965743 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.965881 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:02:04.965908 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.965928 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:02:04.965947 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.965966 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-28 04:02:04.965984 | orchestrator |
2026-03-28 04:02:04.966004 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-28 04:02:04.966103 | orchestrator | Saturday 28 March 2026 04:01:47 +0000 (0:00:17.785) 0:01:15.388 ********
2026-03-28 04:02:04.966127 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966139 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.966150 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966203 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.966225 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966243 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.966261 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966273 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.966284 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966296 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.966307 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966319 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.966330 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-28 04:02:04.966341 | orchestrator |
2026-03-28 04:02:04.966352 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-28 04:02:04.966364 | orchestrator | Saturday 28 March 2026 04:01:50 +0000 (0:00:02.787) 0:01:18.176 ********
2026-03-28 04:02:04.966375 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966388 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.966400 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966412 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.966424 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966435 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.966446 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966486 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.966498 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966510 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966536 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.966547 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-28 04:02:04.966558 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.966569 | orchestrator |
2026-03-28 04:02:04.966583 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-28 04:02:04.966596 | orchestrator | Saturday 28 March 2026 04:01:52 +0000 (0:00:01.897) 0:01:20.073 ********
2026-03-28 04:02:04.966609 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 04:02:04.966622 | orchestrator |
2026-03-28 04:02:04.966635 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-28 04:02:04.966649 | orchestrator | Saturday 28 March 2026 04:01:52 +0000 (0:00:00.744) 0:01:20.818 ********
2026-03-28 04:02:04.966661 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:02:04.966678 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.966697 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.966715 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.966733 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.966749 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.966767 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.966785 | orchestrator |
2026-03-28 04:02:04.966804 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-28 04:02:04.966823 | orchestrator | Saturday 28 March 2026 04:01:53 +0000 (0:00:00.778) 0:01:21.596 ********
2026-03-28 04:02:04.966921 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:02:04.966936 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.966947 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.966957 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.966968 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:02:04.966979 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:02:04.966990 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:02:04.967001 | orchestrator |
2026-03-28 04:02:04.967012 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-28 04:02:04.967045 | orchestrator | Saturday 28 March 2026 04:01:55 +0000 (0:00:02.294) 0:01:23.891 ********
2026-03-28 04:02:04.967056 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967068 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967079 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:02:04.967089 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967100 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967111 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.967122 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.967133 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.967144 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967178 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.967191 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967202 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.967213 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-28 04:02:04.967236 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.967247 | orchestrator |
2026-03-28 04:02:04.967259 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-28 04:02:04.967269 | orchestrator | Saturday 28 March 2026 04:01:57 +0000 (0:00:01.550) 0:01:25.442 ********
2026-03-28 04:02:04.967281 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967292 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967303 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.967314 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.967328 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967347 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.967364 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967382 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.967398 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967416 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.967434 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967450 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-28 04:02:04.967466 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.967481 | orchestrator |
2026-03-28 04:02:04.967498 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-28 04:02:04.967515 | orchestrator | Saturday 28 March 2026 04:01:59 +0000 (0:00:01.488) 0:01:26.930 ********
2026-03-28 04:02:04.967533 | orchestrator | [WARNING]: Skipped
2026-03-28 04:02:04.967556 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-28 04:02:04.967575 | orchestrator | due to this access issue:
2026-03-28 04:02:04.967593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-28 04:02:04.967612 | orchestrator | not a directory
2026-03-28 04:02:04.967643 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 04:02:04.967661 | orchestrator |
2026-03-28 04:02:04.967677 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-28 04:02:04.967688 | orchestrator | Saturday 28 March 2026 04:02:00 +0000 (0:00:01.005) 0:01:28.102 ********
2026-03-28 04:02:04.967699 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:02:04.967710 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.967721 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.967732 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.967743 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.967754 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.967765 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.967776 | orchestrator |
2026-03-28 04:02:04.967787 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-28 04:02:04.967798 | orchestrator | Saturday 28 March 2026 04:02:01 +0000 (0:00:01.089) 0:01:29.107 ********
2026-03-28 04:02:04.967809 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:02:04.967820 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:02:04.967830 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:02:04.967841 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:02:04.967852 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:02:04.967862 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:02:04.967873 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:02:04.967884 | orchestrator |
2026-03-28 04:02:04.967895 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-28 04:02:04.967906 | orchestrator | Saturday 28 March 2026 04:02:02 +0000 (0:00:01.089) 0:01:30.196 ********
2026-03-28 04:02:04.967940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:02:06.693804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 04:02:06.693935 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy':
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-28 04:02:06.693952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:02:06.693961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:02:06.693985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 
04:02:06.693993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:02:06.694090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:06.694128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:06.694140 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 04:02:06.694152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:06.694204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:06.694217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:06.694238 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:06.694262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:06.694284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.750830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.750854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750880 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750891 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-28 04:02:08.750938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 04:02:08.750968 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.750977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.750992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.751007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:02:08.751017 | orchestrator | 2026-03-28 04:02:08.751028 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-28 04:02:08.751039 | orchestrator | Saturday 28 March 2026 04:02:06 +0000 (0:00:04.396) 0:01:34.592 ******** 2026-03-28 04:02:08.751048 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 04:02:08.751058 | orchestrator | skipping: 
[testbed-manager] 2026-03-28 04:02:08.751066 | orchestrator | 2026-03-28 04:02:08.751075 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:02:08.751084 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:01.516) 0:01:36.109 ******** 2026-03-28 04:02:08.751092 | orchestrator | 2026-03-28 04:02:08.751100 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:02:08.751109 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.076) 0:01:36.185 ******** 2026-03-28 04:02:08.751117 | orchestrator | 2026-03-28 04:02:08.751126 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:02:08.751135 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.074) 0:01:36.260 ******** 2026-03-28 04:02:08.751144 | orchestrator | 2026-03-28 04:02:08.751152 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:02:08.751209 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.076) 0:01:36.336 ******** 2026-03-28 04:04:03.605740 | orchestrator | 2026-03-28 04:04:03.605838 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:04:03.605851 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.070) 0:01:36.407 ******** 2026-03-28 04:04:03.605859 | orchestrator | 2026-03-28 04:04:03.605867 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:04:03.605875 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.063) 0:01:36.471 ******** 2026-03-28 04:04:03.605883 | orchestrator | 2026-03-28 04:04:03.605891 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 04:04:03.605899 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.068) 
0:01:36.539 ******** 2026-03-28 04:04:03.605906 | orchestrator | 2026-03-28 04:04:03.605914 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-28 04:04:03.605921 | orchestrator | Saturday 28 March 2026 04:02:08 +0000 (0:00:00.098) 0:01:36.638 ******** 2026-03-28 04:04:03.605929 | orchestrator | changed: [testbed-manager] 2026-03-28 04:04:03.605937 | orchestrator | 2026-03-28 04:04:03.605945 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-28 04:04:03.605953 | orchestrator | Saturday 28 March 2026 04:02:37 +0000 (0:00:28.408) 0:02:05.047 ******** 2026-03-28 04:04:03.605960 | orchestrator | changed: [testbed-manager] 2026-03-28 04:04:03.605968 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:04:03.605975 | orchestrator | changed: [testbed-node-4] 2026-03-28 04:04:03.605983 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:04:03.605990 | orchestrator | changed: [testbed-node-3] 2026-03-28 04:04:03.605997 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:04:03.606006 | orchestrator | changed: [testbed-node-5] 2026-03-28 04:04:03.606060 | orchestrator | 2026-03-28 04:04:03.606069 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-28 04:04:03.606077 | orchestrator | Saturday 28 March 2026 04:02:50 +0000 (0:00:13.354) 0:02:18.401 ******** 2026-03-28 04:04:03.606105 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:04:03.606121 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:04:03.606129 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:04:03.606136 | orchestrator | 2026-03-28 04:04:03.606144 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-28 04:04:03.606152 | orchestrator | Saturday 28 March 2026 04:03:01 +0000 (0:00:10.900) 0:02:29.302 ******** 2026-03-28 04:04:03.606187 | orchestrator | changed: 
[testbed-node-1] 2026-03-28 04:04:03.606199 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:04:03.606211 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:04:03.606223 | orchestrator | 2026-03-28 04:04:03.606234 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-28 04:04:03.606245 | orchestrator | Saturday 28 March 2026 04:03:12 +0000 (0:00:10.670) 0:02:39.972 ******** 2026-03-28 04:04:03.606258 | orchestrator | changed: [testbed-manager] 2026-03-28 04:04:03.606270 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:04:03.606281 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:04:03.606293 | orchestrator | changed: [testbed-node-3] 2026-03-28 04:04:03.606305 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:04:03.606317 | orchestrator | changed: [testbed-node-5] 2026-03-28 04:04:03.606329 | orchestrator | changed: [testbed-node-4] 2026-03-28 04:04:03.606340 | orchestrator | 2026-03-28 04:04:03.606352 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-28 04:04:03.606364 | orchestrator | Saturday 28 March 2026 04:03:26 +0000 (0:00:14.761) 0:02:54.733 ******** 2026-03-28 04:04:03.606377 | orchestrator | changed: [testbed-manager] 2026-03-28 04:04:03.606389 | orchestrator | 2026-03-28 04:04:03.606401 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-28 04:04:03.606429 | orchestrator | Saturday 28 March 2026 04:03:35 +0000 (0:00:08.987) 0:03:03.720 ******** 2026-03-28 04:04:03.606443 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:04:03.606456 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:04:03.606466 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:04:03.606475 | orchestrator | 2026-03-28 04:04:03.606484 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-28 04:04:03.606493 | 
orchestrator | Saturday 28 March 2026 04:03:47 +0000 (0:00:11.260) 0:03:14.980 ******** 2026-03-28 04:04:03.606501 | orchestrator | changed: [testbed-manager] 2026-03-28 04:04:03.606510 | orchestrator | 2026-03-28 04:04:03.606518 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-28 04:04:03.606527 | orchestrator | Saturday 28 March 2026 04:03:57 +0000 (0:00:10.686) 0:03:25.667 ******** 2026-03-28 04:04:03.606535 | orchestrator | changed: [testbed-node-4] 2026-03-28 04:04:03.606544 | orchestrator | changed: [testbed-node-3] 2026-03-28 04:04:03.606552 | orchestrator | changed: [testbed-node-5] 2026-03-28 04:04:03.606560 | orchestrator | 2026-03-28 04:04:03.606569 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:04:03.606579 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 04:04:03.606590 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 04:04:03.606599 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 04:04:03.606608 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 04:04:03.606617 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 04:04:03.606641 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 04:04:03.606659 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-28 04:04:03.606669 | orchestrator | 2026-03-28 04:04:03.606678 | orchestrator | 2026-03-28 04:04:03.606685 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 
04:04:03.606693 | orchestrator | Saturday 28 March 2026 04:04:02 +0000 (0:00:05.218) 0:03:30.886 ******** 2026-03-28 04:04:03.606700 | orchestrator | =============================================================================== 2026-03-28 04:04:03.606707 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.63s 2026-03-28 04:04:03.606715 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 28.41s 2026-03-28 04:04:03.606722 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.79s 2026-03-28 04:04:03.606729 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.76s 2026-03-28 04:04:03.606737 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.35s 2026-03-28 04:04:03.606744 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.26s 2026-03-28 04:04:03.606751 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.90s 2026-03-28 04:04:03.606758 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.69s 2026-03-28 04:04:03.606766 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.67s 2026-03-28 04:04:03.606773 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.99s 2026-03-28 04:04:03.606780 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.60s 2026-03-28 04:04:03.606787 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.64s 2026-03-28 04:04:03.606795 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.22s 2026-03-28 04:04:03.606802 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.40s 2026-03-28 04:04:03.606810 | 
orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.88s 2026-03-28 04:04:03.606817 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.79s 2026-03-28 04:04:03.606824 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.50s 2026-03-28 04:04:03.606831 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.29s 2026-03-28 04:04:03.606839 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.97s 2026-03-28 04:04:03.606846 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.91s 2026-03-28 04:04:07.957592 | orchestrator | 2026-03-28 04:04:07 | INFO  | Task a521c505-f529-4e6c-84fd-672081b096e0 (grafana) was prepared for execution. 2026-03-28 04:04:07.957664 | orchestrator | 2026-03-28 04:04:07 | INFO  | It takes a moment until task a521c505-f529-4e6c-84fd-672081b096e0 (grafana) has been started and output is visible here. 
2026-03-28 04:04:18.551928 | orchestrator | 2026-03-28 04:04:18.552033 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 04:04:18.552046 | orchestrator | 2026-03-28 04:04:18.552055 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 04:04:18.552062 | orchestrator | Saturday 28 March 2026 04:04:12 +0000 (0:00:00.295) 0:00:00.295 ******** 2026-03-28 04:04:18.552071 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:04:18.552079 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:04:18.552087 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:04:18.552094 | orchestrator | 2026-03-28 04:04:18.552101 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 04:04:18.552108 | orchestrator | Saturday 28 March 2026 04:04:12 +0000 (0:00:00.340) 0:00:00.636 ******** 2026-03-28 04:04:18.552135 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-28 04:04:18.552144 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-28 04:04:18.552151 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-28 04:04:18.552189 | orchestrator | 2026-03-28 04:04:18.552198 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-28 04:04:18.552205 | orchestrator | 2026-03-28 04:04:18.552212 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 04:04:18.552219 | orchestrator | Saturday 28 March 2026 04:04:13 +0000 (0:00:00.486) 0:00:01.123 ******** 2026-03-28 04:04:18.552227 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:04:18.552235 | orchestrator | 2026-03-28 04:04:18.552242 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-03-28 04:04:18.552249 | orchestrator | Saturday 28 March 2026 04:04:14 +0000 (0:00:00.649) 0:00:01.772 ******** 2026-03-28 04:04:18.552259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552286 | orchestrator | 2026-03-28 04:04:18.552293 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-28 04:04:18.552300 | orchestrator | Saturday 28 March 2026 04:04:15 +0000 (0:00:00.976) 0:00:02.748 ******** 2026-03-28 04:04:18.552308 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-28 04:04:18.552315 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-28 04:04:18.552323 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 04:04:18.552330 | orchestrator | 2026-03-28 04:04:18.552338 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-28 04:04:18.552345 | orchestrator | Saturday 28 March 2026 04:04:15 +0000 (0:00:00.984) 0:00:03.732 ******** 2026-03-28 04:04:18.552352 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:04:18.552366 | orchestrator | 2026-03-28 04:04:18.552373 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-28 04:04:18.552380 | orchestrator | Saturday 28 March 2026 04:04:16 +0000 (0:00:00.611) 0:00:04.344 ******** 2026-03-28 04:04:18.552407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:18.552431 | orchestrator | 2026-03-28 04:04:18.552439 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-28 04:04:18.552446 | orchestrator | Saturday 28 March 2026 04:04:17 +0000 
(0:00:01.347) 0:00:05.692 ******** 2026-03-28 04:04:18.552453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:18.552461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:18.552474 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:04:18.552482 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:04:18.552501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:25.860058 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:04:25.860226 | orchestrator | 2026-03-28 04:04:25.860249 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-28 04:04:25.860262 | orchestrator | Saturday 28 March 2026 04:04:18 +0000 (0:00:00.597) 0:00:06.289 ******** 2026-03-28 04:04:25.860275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:25.860290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:25.860301 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:04:25.860312 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:04:25.860324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-28 04:04:25.860335 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:04:25.860346 | orchestrator | 2026-03-28 04:04:25.860357 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-28 04:04:25.860368 | orchestrator | Saturday 28 March 2026 04:04:19 +0000 (0:00:00.661) 0:00:06.951 ******** 2026-03-28 04:04:25.860380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860452 | orchestrator | 2026-03-28 04:04:25.860459 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-28 04:04:25.860465 | orchestrator | Saturday 28 March 2026 04:04:20 +0000 (0:00:01.328) 0:00:08.280 ******** 2026-03-28 04:04:25.860472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:04:25.860500 | 
orchestrator | 2026-03-28 04:04:25.860506 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-28 04:04:25.860512 | orchestrator | Saturday 28 March 2026 04:04:22 +0000 (0:00:01.682) 0:00:09.963 ******** 2026-03-28 04:04:25.860519 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:04:25.860525 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:04:25.860531 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:04:25.860537 | orchestrator | 2026-03-28 04:04:25.860543 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-28 04:04:25.860549 | orchestrator | Saturday 28 March 2026 04:04:22 +0000 (0:00:00.402) 0:00:10.365 ******** 2026-03-28 04:04:25.860556 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 04:04:25.860563 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 04:04:25.860569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-28 04:04:25.860575 | orchestrator | 2026-03-28 04:04:25.860581 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-28 04:04:25.860587 | orchestrator | Saturday 28 March 2026 04:04:24 +0000 (0:00:01.383) 0:00:11.748 ******** 2026-03-28 04:04:25.860594 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 04:04:25.860601 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 04:04:25.860612 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-28 04:04:25.860620 | orchestrator | 2026-03-28 04:04:25.860628 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-28 04:04:25.860641 | orchestrator | Saturday 28 March 2026 04:04:25 +0000 (0:00:01.842) 0:00:13.590 ******** 2026-03-28 04:04:32.538517 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 04:04:32.538608 | orchestrator | 2026-03-28 04:04:32.538620 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-28 04:04:32.538630 | orchestrator | Saturday 28 March 2026 04:04:26 +0000 (0:00:00.818) 0:00:14.409 ******** 2026-03-28 04:04:32.538639 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-28 04:04:32.538648 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-28 04:04:32.538656 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:04:32.538665 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:04:32.538673 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:04:32.538681 | orchestrator | 2026-03-28 04:04:32.538690 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-28 04:04:32.538698 | orchestrator | Saturday 28 March 2026 04:04:27 +0000 (0:00:00.794) 0:00:15.203 ******** 2026-03-28 04:04:32.538706 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:04:32.538714 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:04:32.538722 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:04:32.538730 | orchestrator | 2026-03-28 04:04:32.538738 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-28 04:04:32.538746 | orchestrator | Saturday 28 March 2026 04:04:27 +0000 (0:00:00.379) 0:00:15.583 ******** 2026-03-28 04:04:32.538757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1099927, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2378016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1099927, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2378016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1099927, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2378016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100112, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2620995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100112, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2620995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100112, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2620995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100003, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100003, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100003, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538894 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100114, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2638652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100114, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2638652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:32.538923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1100114, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2638652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:36.337776 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100061, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:36.337876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100061, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:36.337890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1100061, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-28 04:04:36.337903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100090, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2587821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:36.337915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100090, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2587821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:36.337940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1100090, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2587821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False}})
2026-03-28 04:04:36.337969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099924, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2259722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.337988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099924, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2259722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.337999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1099924, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2259722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.338010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099993, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2392998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.338079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099993, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2392998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.338099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099993, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2392998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:36.338113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100006, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100006, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1100006, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2417152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100077, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2559073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100077, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2559073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1100077, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2559073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100108, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2617843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100108, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2617843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1100108, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2617843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099996, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2405639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099996, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2405639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099996, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2405639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100087, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2575934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:40.552807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100087, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2575934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1100087, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2575934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100068, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2553928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100068, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2553928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1100068, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2553928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100058, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100058, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1100058, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2522953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2522953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1100055, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2522953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100079, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2572672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100079, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2572672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:44.557752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1100079, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2572672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100008, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2514179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100008, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2514179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1100008, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2514179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100096, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2609518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100096, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2609518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1100096, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2609518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1100471, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3420901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1100471, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3420901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1100471, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3420901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100198, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2912707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100198, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2912707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1100198, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2912707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:48.453833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100176, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2708416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100176, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2708416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1100176, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2708416, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100299, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2949157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100299, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2949157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100299, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2949157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100128, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2643003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100128, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2643003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100128, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2643003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100443, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.331482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100443, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.331482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100443, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.331482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100307, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.313301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-28 04:04:53.024310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100307, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0,
'mtime': 1764530892.0, 'ctime': 1774663248.313301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100307, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.313301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100448, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3323762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100448, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3323762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100448, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3323762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100464, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.339426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100464, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.339426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100464, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.339426, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100435, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3303015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100435, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3303015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100435, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3303015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100289, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.293129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783895 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100289, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.293129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:04:56.783910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100289, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.293129, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100191, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2739244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604820 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100191, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2739244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100285, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2914968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100191, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2739244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-28 04:05:00.604887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100285, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2914968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100177, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2728543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100285, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2914968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100177, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2728543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100294, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2933009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100294, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2933009, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100177, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2728543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100456, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3383017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:00.604992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100456, 'dev': 107, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3383017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.926772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100294, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2933009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.926930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100453, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3343015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.926980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100453, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3343015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100456, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3383017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100131, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2654557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100131, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2654557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100453, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3343015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100136, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2704957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100136, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2704957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100131, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2654557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100432, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3288057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927242 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100432, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3288057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:05:04.927277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100136, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.2704957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:06:47.462484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100451, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.333031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:06:47.462624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100451, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.333031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:06:47.462641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100432, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.3288057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:06:47.462652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100451, 'dev': 107, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774663248.333031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-28 04:06:47.462661 | orchestrator | 2026-03-28 04:06:47.462672 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-28 04:06:47.462685 | orchestrator | Saturday 28 March 2026 04:05:06 +0000 (0:00:38.864) 0:00:54.447 ******** 2026-03-28 04:06:47.462699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:06:47.462762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:06:47.462782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-28 04:06:47.462796 | orchestrator | 2026-03-28 04:06:47.462809 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-28 04:06:47.462823 | orchestrator | Saturday 28 March 2026 04:05:07 +0000 (0:00:01.091) 0:00:55.539 ******** 2026-03-28 04:06:47.462836 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:06:47.462848 | orchestrator | 2026-03-28 04:06:47.462861 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-28 04:06:47.462874 | orchestrator | Saturday 28 March 2026 04:05:09 +0000 (0:00:02.196) 0:00:57.735 ******** 2026-03-28 04:06:47.462894 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:06:47.462908 | orchestrator | 2026-03-28 04:06:47.462921 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-28 04:06:47.462935 | orchestrator | Saturday 28 March 2026 04:05:12 +0000 (0:00:02.347) 0:01:00.083 ******** 2026-03-28 04:06:47.462945 | orchestrator | 2026-03-28 04:06:47.462953 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-28 04:06:47.462961 | orchestrator | Saturday 28 March 2026 04:05:12 +0000 (0:00:00.075) 0:01:00.158 ******** 2026-03-28 04:06:47.462969 | orchestrator | 2026-03-28 04:06:47.462977 | orchestrator | TASK [grafana : 
Flush handlers] ************************************************ 2026-03-28 04:06:47.462984 | orchestrator | Saturday 28 March 2026 04:05:12 +0000 (0:00:00.072) 0:01:00.231 ******** 2026-03-28 04:06:47.462992 | orchestrator | 2026-03-28 04:06:47.463000 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-28 04:06:47.463008 | orchestrator | Saturday 28 March 2026 04:05:12 +0000 (0:00:00.073) 0:01:00.305 ******** 2026-03-28 04:06:47.463016 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:06:47.463026 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:06:47.463035 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:06:47.463057 | orchestrator | 2026-03-28 04:06:47.463067 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-28 04:06:47.463077 | orchestrator | Saturday 28 March 2026 04:05:19 +0000 (0:00:07.278) 0:01:07.583 ******** 2026-03-28 04:06:47.463086 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:06:47.463094 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:06:47.463103 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-28 04:06:47.463114 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-28 04:06:47.463131 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-28 04:06:47.463141 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-03-28 04:06:47.463150 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:06:47.463160 | orchestrator | 2026-03-28 04:06:47.463192 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-28 04:06:47.463202 | orchestrator | Saturday 28 March 2026 04:06:10 +0000 (0:00:50.744) 0:01:58.328 ******** 2026-03-28 04:06:47.463214 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:06:47.463227 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:06:47.463240 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:06:47.463253 | orchestrator | 2026-03-28 04:06:47.463269 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-28 04:06:47.463283 | orchestrator | Saturday 28 March 2026 04:06:42 +0000 (0:00:31.705) 0:02:30.033 ******** 2026-03-28 04:06:47.463297 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:06:47.463308 | orchestrator | 2026-03-28 04:06:47.463317 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-28 04:06:47.463326 | orchestrator | Saturday 28 March 2026 04:06:44 +0000 (0:00:02.262) 0:02:32.296 ******** 2026-03-28 04:06:47.463336 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:06:47.463345 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:06:47.463354 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:06:47.463363 | orchestrator | 2026-03-28 04:06:47.463372 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-28 04:06:47.463381 | orchestrator | Saturday 28 March 2026 04:06:44 +0000 (0:00:00.315) 0:02:32.612 ******** 2026-03-28 04:06:47.463390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-28 04:06:47.463409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-28 04:06:48.120472 | orchestrator | 2026-03-28 04:06:48.120546 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-28 04:06:48.120554 | orchestrator | Saturday 28 March 2026 04:06:47 +0000 (0:00:02.583) 0:02:35.195 ******** 2026-03-28 04:06:48.120567 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:06:48.120572 | orchestrator | 2026-03-28 04:06:48.120577 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:06:48.120582 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 04:06:48.120587 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 04:06:48.120591 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 04:06:48.120595 | orchestrator | 2026-03-28 04:06:48.120599 | orchestrator | 2026-03-28 04:06:48.120603 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:06:48.120607 | orchestrator | Saturday 28 March 2026 04:06:47 +0000 (0:00:00.281) 0:02:35.476 ******** 2026-03-28 04:06:48.120611 | orchestrator | =============================================================================== 2026-03-28 04:06:48.120628 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.74s 2026-03-28 04:06:48.120632 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 38.86s 2026-03-28 04:06:48.120651 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.71s 2026-03-28 04:06:48.120657 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.28s 2026-03-28 04:06:48.120663 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.58s 2026-03-28 04:06:48.120669 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2026-03-28 04:06:48.120675 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-03-28 04:06:48.120682 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.20s 2026-03-28 04:06:48.120689 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.84s 2026-03-28 04:06:48.120694 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.68s 2026-03-28 04:06:48.120700 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s 2026-03-28 04:06:48.120706 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s 2026-03-28 04:06:48.120712 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-03-28 04:06:48.120718 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s 2026-03-28 04:06:48.120725 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.98s 2026-03-28 04:06:48.120732 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.98s 2026-03-28 04:06:48.120736 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.82s 2026-03-28 04:06:48.120740 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.79s 2026-03-28 04:06:48.120744 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s 2026-03-28 04:06:48.120747 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.65s 2026-03-28 04:06:48.459614 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-03-28 04:06:48.469350 | orchestrator | + set -e 2026-03-28 04:06:48.469438 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:06:48.469454 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:06:48.469468 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:06:48.469480 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:06:48.469491 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:06:48.469502 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 04:06:48.469513 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 04:06:48.469524 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 04:06:48.469535 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 04:06:48.469547 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 04:06:48.469558 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 04:06:48.469570 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 04:06:48.469581 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 04:06:48.469592 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 04:06:48.469605 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 04:06:48.469616 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 04:06:48.469627 | orchestrator | ++ export ARA=false 2026-03-28 04:06:48.469638 | orchestrator | ++ ARA=false 2026-03-28 04:06:48.469649 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 04:06:48.469660 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 04:06:48.469671 | orchestrator | ++ export TEMPEST=false 2026-03-28 04:06:48.469682 | orchestrator | ++ TEMPEST=false 
2026-03-28 04:06:48.469693 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 04:06:48.469703 | orchestrator | ++ IS_ZUUL=true 2026-03-28 04:06:48.469714 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:06:48.469726 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:06:48.469749 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 04:06:48.469760 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 04:06:48.469771 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 04:06:48.469782 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 04:06:48.469793 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 04:06:48.469804 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 04:06:48.469815 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 04:06:48.469826 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 04:06:48.470887 | orchestrator | ++ semver 9.5.0 8.0.0 2026-03-28 04:06:48.542499 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 04:06:48.542579 | orchestrator | + osism apply clusterapi 2026-03-28 04:06:50.791026 | orchestrator | 2026-03-28 04:06:50 | INFO  | Task 64147ab6-ef69-4d16-82db-228aaf71b0eb (clusterapi) was prepared for execution. 2026-03-28 04:06:50.791111 | orchestrator | 2026-03-28 04:06:50 | INFO  | It takes a moment until task 64147ab6-ef69-4d16-82db-228aaf71b0eb (clusterapi) has been started and output is visible here. 
2026-03-28 04:07:58.762551 | orchestrator | 2026-03-28 04:07:58.762659 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-03-28 04:07:58.762677 | orchestrator | 2026-03-28 04:07:58.762704 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-03-28 04:07:58.762717 | orchestrator | Saturday 28 March 2026 04:06:55 +0000 (0:00:00.259) 0:00:00.259 ******** 2026-03-28 04:07:58.762728 | orchestrator | included: cert_manager for testbed-manager 2026-03-28 04:07:58.762740 | orchestrator | 2026-03-28 04:07:58.762751 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-03-28 04:07:58.762763 | orchestrator | Saturday 28 March 2026 04:06:55 +0000 (0:00:00.247) 0:00:00.507 ******** 2026-03-28 04:07:58.762774 | orchestrator | changed: [testbed-manager] 2026-03-28 04:07:58.762785 | orchestrator | 2026-03-28 04:07:58.762797 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-03-28 04:07:58.762808 | orchestrator | Saturday 28 March 2026 04:07:01 +0000 (0:00:05.464) 0:00:05.972 ******** 2026-03-28 04:07:58.762819 | orchestrator | changed: [testbed-manager] 2026-03-28 04:07:58.762830 | orchestrator | 2026-03-28 04:07:58.762841 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-03-28 04:07:58.762852 | orchestrator | 2026-03-28 04:07:58.762863 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-03-28 04:07:58.762874 | orchestrator | Saturday 28 March 2026 04:07:34 +0000 (0:00:33.729) 0:00:39.701 ******** 2026-03-28 04:07:58.762885 | orchestrator | ok: [testbed-manager] 2026-03-28 04:07:58.762897 | orchestrator | 2026-03-28 04:07:58.762908 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-03-28 04:07:58.762938 | orchestrator | Saturday 
28 March 2026 04:07:36 +0000 (0:00:01.228) 0:00:40.930 ******** 2026-03-28 04:07:58.762950 | orchestrator | ok: [testbed-manager] 2026-03-28 04:07:58.762961 | orchestrator | 2026-03-28 04:07:58.762972 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-03-28 04:07:58.762983 | orchestrator | Saturday 28 March 2026 04:07:36 +0000 (0:00:00.155) 0:00:41.085 ******** 2026-03-28 04:07:58.762995 | orchestrator | ok: [testbed-manager] 2026-03-28 04:07:58.763006 | orchestrator | 2026-03-28 04:07:58.763017 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-03-28 04:07:58.763028 | orchestrator | Saturday 28 March 2026 04:07:55 +0000 (0:00:19.720) 0:01:00.806 ******** 2026-03-28 04:07:58.763039 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:07:58.763050 | orchestrator | 2026-03-28 04:07:58.763061 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-03-28 04:07:58.763072 | orchestrator | Saturday 28 March 2026 04:07:56 +0000 (0:00:00.146) 0:01:00.952 ******** 2026-03-28 04:07:58.763083 | orchestrator | changed: [testbed-manager] 2026-03-28 04:07:58.763094 | orchestrator | 2026-03-28 04:07:58.763108 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:07:58.763123 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 04:07:58.763136 | orchestrator | 2026-03-28 04:07:58.763150 | orchestrator | 2026-03-28 04:07:58.763163 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:07:58.763198 | orchestrator | Saturday 28 March 2026 04:07:58 +0000 (0:00:02.262) 0:01:03.215 ******** 2026-03-28 04:07:58.763211 | orchestrator | =============================================================================== 2026-03-28 04:07:58.763223 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 33.73s 2026-03-28 04:07:58.763290 | orchestrator | Initialize the CAPI management cluster --------------------------------- 19.72s 2026-03-28 04:07:58.763305 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.46s 2026-03-28 04:07:58.763321 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.26s 2026-03-28 04:07:58.763340 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.23s 2026-03-28 04:07:58.763357 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s 2026-03-28 04:07:58.763373 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-03-28 04:07:58.763393 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s 2026-03-28 04:07:59.106525 | orchestrator | + osism apply magnum 2026-03-28 04:08:01.384093 | orchestrator | 2026-03-28 04:08:01 | INFO  | Task 97e38536-8279-42c7-a0d2-52a7158ae43f (magnum) was prepared for execution. 2026-03-28 04:08:01.384283 | orchestrator | 2026-03-28 04:08:01 | INFO  | It takes a moment until task 97e38536-8279-42c7-a0d2-52a7158ae43f (magnum) has been started and output is visible here. 
2026-03-28 04:08:45.477828 | orchestrator | 2026-03-28 04:08:45.477908 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 04:08:45.477918 | orchestrator | 2026-03-28 04:08:45.477925 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 04:08:45.477932 | orchestrator | Saturday 28 March 2026 04:08:06 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-03-28 04:08:45.477939 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:08:45.477946 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:08:45.477952 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:08:45.477958 | orchestrator | 2026-03-28 04:08:45.477964 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 04:08:45.477970 | orchestrator | Saturday 28 March 2026 04:08:06 +0000 (0:00:00.368) 0:00:00.645 ******** 2026-03-28 04:08:45.477976 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-28 04:08:45.477982 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-28 04:08:45.477988 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-28 04:08:45.477994 | orchestrator | 2026-03-28 04:08:45.478000 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-28 04:08:45.478006 | orchestrator | 2026-03-28 04:08:45.478059 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 04:08:45.478070 | orchestrator | Saturday 28 March 2026 04:08:06 +0000 (0:00:00.516) 0:00:01.161 ******** 2026-03-28 04:08:45.478080 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:08:45.478091 | orchestrator | 2026-03-28 04:08:45.478101 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-28 
04:08:45.478111 | orchestrator | Saturday 28 March 2026 04:08:07 +0000 (0:00:00.621) 0:00:01.782 ******** 2026-03-28 04:08:45.478122 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-28 04:08:45.478128 | orchestrator | 2026-03-28 04:08:45.478133 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-28 04:08:45.478139 | orchestrator | Saturday 28 March 2026 04:08:11 +0000 (0:00:03.568) 0:00:05.351 ******** 2026-03-28 04:08:45.478145 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-28 04:08:45.478151 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-28 04:08:45.478157 | orchestrator | 2026-03-28 04:08:45.478163 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-28 04:08:45.478169 | orchestrator | Saturday 28 March 2026 04:08:18 +0000 (0:00:06.881) 0:00:12.232 ******** 2026-03-28 04:08:45.478208 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 04:08:45.478277 | orchestrator | 2026-03-28 04:08:45.478285 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-28 04:08:45.478302 | orchestrator | Saturday 28 March 2026 04:08:21 +0000 (0:00:03.439) 0:00:15.671 ******** 2026-03-28 04:08:45.478327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 04:08:45.478334 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-28 04:08:45.478340 | orchestrator | 2026-03-28 04:08:45.478346 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-28 04:08:45.478352 | orchestrator | Saturday 28 March 2026 04:08:25 +0000 (0:00:04.149) 0:00:19.821 ******** 2026-03-28 04:08:45.478358 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-28 04:08:45.478364 | orchestrator | 2026-03-28 04:08:45.478401 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-28 04:08:45.478424 | orchestrator | Saturday 28 March 2026 04:08:28 +0000 (0:00:03.283) 0:00:23.104 ******** 2026-03-28 04:08:45.478434 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-28 04:08:45.478444 | orchestrator | 2026-03-28 04:08:45.478453 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-28 04:08:45.478464 | orchestrator | Saturday 28 March 2026 04:08:32 +0000 (0:00:04.053) 0:00:27.158 ******** 2026-03-28 04:08:45.478474 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:08:45.478483 | orchestrator | 2026-03-28 04:08:45.478494 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-28 04:08:45.478504 | orchestrator | Saturday 28 March 2026 04:08:36 +0000 (0:00:03.410) 0:00:30.569 ******** 2026-03-28 04:08:45.478514 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:08:45.478524 | orchestrator | 2026-03-28 04:08:45.478535 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-28 04:08:45.478546 | orchestrator | Saturday 28 March 2026 04:08:40 +0000 (0:00:03.833) 0:00:34.402 ******** 2026-03-28 04:08:45.478557 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:08:45.478568 | orchestrator | 2026-03-28 04:08:45.478578 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-28 04:08:45.478585 | orchestrator | Saturday 28 March 2026 04:08:43 +0000 (0:00:03.500) 0:00:37.903 ******** 2026-03-28 04:08:45.478612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:45.478622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:45.478663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:45.478673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:45.478682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:45.478698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:53.186358 | orchestrator | 2026-03-28 04:08:53.186449 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-28 04:08:53.186465 | orchestrator | Saturday 28 March 2026 04:08:45 +0000 (0:00:01.769) 0:00:39.672 ******** 2026-03-28 04:08:53.186477 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:08:53.186490 | orchestrator | 2026-03-28 04:08:53.186501 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-28 04:08:53.186512 | orchestrator | Saturday 28 March 2026 04:08:45 +0000 (0:00:00.142) 0:00:39.814 ******** 2026-03-28 04:08:53.186521 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:08:53.186527 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:08:53.186551 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:08:53.186558 | orchestrator | 2026-03-28 04:08:53.186565 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-28 04:08:53.186571 | orchestrator | Saturday 28 March 2026 04:08:45 +0000 (0:00:00.329) 0:00:40.143 ******** 2026-03-28 04:08:53.186577 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 04:08:53.186584 | orchestrator | 2026-03-28 04:08:53.186590 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-28 04:08:53.186597 | orchestrator | Saturday 28 March 2026 04:08:46 +0000 (0:00:00.894) 0:00:41.038 ******** 2026-03-28 04:08:53.186605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:53.186627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:53.186634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:53.186655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:53.186671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:53.186679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:53.186685 | orchestrator | 2026-03-28 04:08:53.186695 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-28 04:08:53.186702 
| orchestrator | Saturday 28 March 2026 04:08:49 +0000 (0:00:02.518) 0:00:43.556 ******** 2026-03-28 04:08:53.186708 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:08:53.186715 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:08:53.186722 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:08:53.186728 | orchestrator | 2026-03-28 04:08:53.186734 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 04:08:53.186740 | orchestrator | Saturday 28 March 2026 04:08:49 +0000 (0:00:00.555) 0:00:44.112 ******** 2026-03-28 04:08:53.186747 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:08:53.186754 | orchestrator | 2026-03-28 04:08:53.186761 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-28 04:08:53.186767 | orchestrator | Saturday 28 March 2026 04:08:50 +0000 (0:00:00.637) 0:00:44.750 ******** 2026-03-28 04:08:53.186774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:53.186797 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:54.091076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:54.091236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:54.091264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:54.091281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:08:54.091297 | orchestrator | 2026-03-28 04:08:54.091314 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-28 04:08:54.091332 | orchestrator | Saturday 28 March 2026 04:08:53 +0000 (0:00:02.635) 0:00:47.385 ******** 2026-03-28 04:08:54.091395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:54.091412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:54.091429 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:08:54.091455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:54.091474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:54.091490 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:08:54.091505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:54.091540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:57.846660 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:08:57.846750 | orchestrator | 2026-03-28 
04:08:57.846764 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-28 04:08:57.846774 | orchestrator | Saturday 28 March 2026 04:08:54 +0000 (0:00:00.902) 0:00:48.288 ******** 2026-03-28 04:08:57.846786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:57.846825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:57.846841 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 04:08:57.846856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:57.846895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:57.846909 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:08:57.846957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:08:57.846970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:08:57.846983 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:08:57.846995 | orchestrator | 2026-03-28 04:08:57.847008 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-28 04:08:57.847026 | orchestrator | Saturday 28 March 2026 04:08:54 +0000 (0:00:00.917) 0:00:49.205 ******** 2026-03-28 04:08:57.847040 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:57.847055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:08:57.847084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:04.300523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300649 | orchestrator | 2026-03-28 04:09:04.300655 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-28 04:09:04.300661 | orchestrator | Saturday 28 March 2026 04:08:57 +0000 (0:00:02.842) 0:00:52.048 ******** 2026-03-28 04:09:04.300665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:04.300682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:04.300687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:04.300694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:04.300710 | orchestrator | 2026-03-28 04:09:04.300715 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-28 04:09:04.300719 | orchestrator | Saturday 28 March 2026 04:09:03 +0000 (0:00:05.684) 0:00:57.733 ******** 2026-03-28 04:09:04.300728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:09:06.359795 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:09:06.359906 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:09:06.359943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:09:06.359982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:09:06.359995 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:09:06.360008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-28 04:09:06.360039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:09:06.360051 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:09:06.360063 | orchestrator | 2026-03-28 04:09:06.360075 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-28 04:09:06.360087 | orchestrator | Saturday 28 March 2026 04:09:04 +0000 (0:00:00.775) 0:00:58.508 ******** 2026-03-28 04:09:06.360105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:06.360126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:06.360138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-28 04:09:06.360150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:09:06.360234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 04:10:00.217587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-28 04:10:00.217743 | orchestrator | 2026-03-28 04:10:00.217768 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 04:10:00.217785 | orchestrator | Saturday 28 March 2026 04:09:06 +0000 (0:00:02.049) 0:01:00.558 ******** 2026-03-28 04:10:00.217798 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:10:00.217814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:10:00.217827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:10:00.217840 | orchestrator | 2026-03-28 04:10:00.217853 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-28 04:10:00.217866 | orchestrator | Saturday 28 March 2026 04:09:06 +0000 (0:00:00.542) 0:01:01.100 ******** 2026-03-28 04:10:00.217881 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:10:00.217895 | orchestrator | 2026-03-28 04:10:00.217908 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-28 04:10:00.217921 | orchestrator | Saturday 28 March 2026 04:09:09 +0000 (0:00:02.201) 0:01:03.301 ******** 2026-03-28 04:10:00.217934 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:10:00.217947 | orchestrator | 2026-03-28 04:10:00.217961 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-28 04:10:00.217974 | orchestrator | Saturday 28 March 2026 04:09:11 +0000 (0:00:02.317) 0:01:05.619 ******** 2026-03-28 04:10:00.217988 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:10:00.218001 | orchestrator | 2026-03-28 04:10:00.218085 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-28 04:10:00.218105 | orchestrator | Saturday 28 March 2026 04:09:28 +0000 (0:00:17.054) 0:01:22.673 ******** 2026-03-28 04:10:00.218120 | orchestrator | 2026-03-28 04:10:00.218133 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-03-28 04:10:00.218146 | orchestrator | Saturday 28 March 2026 04:09:28 +0000 (0:00:00.088) 0:01:22.762 ******** 2026-03-28 04:10:00.218193 | orchestrator | 2026-03-28 04:10:00.218207 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-28 04:10:00.218222 | orchestrator | Saturday 28 March 2026 04:09:28 +0000 (0:00:00.079) 0:01:22.842 ******** 2026-03-28 04:10:00.218235 | orchestrator | 2026-03-28 04:10:00.218249 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-28 04:10:00.218263 | orchestrator | Saturday 28 March 2026 04:09:28 +0000 (0:00:00.075) 0:01:22.917 ******** 2026-03-28 04:10:00.218277 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:10:00.218290 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:10:00.218303 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:10:00.218317 | orchestrator | 2026-03-28 04:10:00.218331 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-28 04:10:00.218345 | orchestrator | Saturday 28 March 2026 04:09:43 +0000 (0:00:15.122) 0:01:38.039 ******** 2026-03-28 04:10:00.218358 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:10:00.218373 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:10:00.218385 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:10:00.218399 | orchestrator | 2026-03-28 04:10:00.218413 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:10:00.218428 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 04:10:00.218444 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 04:10:00.218457 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-28 04:10:00.218470 | orchestrator | 2026-03-28 04:10:00.218484 | orchestrator | 2026-03-28 04:10:00.218497 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:10:00.218526 | orchestrator | Saturday 28 March 2026 04:09:59 +0000 (0:00:15.968) 0:01:54.008 ******** 2026-03-28 04:10:00.218540 | orchestrator | =============================================================================== 2026-03-28 04:10:00.218554 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.05s 2026-03-28 04:10:00.218567 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.97s 2026-03-28 04:10:00.218580 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.12s 2026-03-28 04:10:00.218594 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.88s 2026-03-28 04:10:00.218609 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.68s 2026-03-28 04:10:00.218622 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.15s 2026-03-28 04:10:00.218636 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.05s 2026-03-28 04:10:00.218676 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.83s 2026-03-28 04:10:00.218692 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.57s 2026-03-28 04:10:00.218707 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.50s 2026-03-28 04:10:00.218720 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.44s 2026-03-28 04:10:00.218734 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.41s 2026-03-28 04:10:00.218748 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.28s 2026-03-28 04:10:00.218762 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.84s 2026-03-28 04:10:00.218776 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.64s 2026-03-28 04:10:00.218803 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.52s 2026-03-28 04:10:00.218818 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.32s 2026-03-28 04:10:00.218831 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.20s 2026-03-28 04:10:00.218845 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.05s 2026-03-28 04:10:00.218859 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.77s 2026-03-28 04:10:01.016927 | orchestrator | ok: Runtime: 1:45:44.488789 2026-03-28 04:10:01.256953 | 2026-03-28 04:10:01.257100 | TASK [Deploy in a nutshell] 2026-03-28 04:10:01.794661 | orchestrator | skipping: Conditional result was False 2026-03-28 04:10:01.819794 | 2026-03-28 04:10:01.820000 | TASK [Bootstrap services] 2026-03-28 04:10:02.550118 | orchestrator | 2026-03-28 04:10:02.550306 | orchestrator | # BOOTSTRAP 2026-03-28 04:10:02.550321 | orchestrator | 2026-03-28 04:10:02.550330 | orchestrator | + set -e 2026-03-28 04:10:02.550338 | orchestrator | + echo 2026-03-28 04:10:02.550347 | orchestrator | + echo '# BOOTSTRAP' 2026-03-28 04:10:02.550359 | orchestrator | + echo 2026-03-28 04:10:02.550389 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-28 04:10:02.557062 | orchestrator | + set -e 2026-03-28 04:10:02.557185 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-28 04:10:04.856190 | orchestrator | 2026-03-28 04:10:04 | INFO  | It takes a 
moment until task 3de8ab7a-0f68-407f-af71-bebc0777baf5 (flavor-manager) has been started and output is visible here. 2026-03-28 04:10:13.213594 | orchestrator | 2026-03-28 04:10:08 | INFO  | Flavor SCS-1L-1 created 2026-03-28 04:10:13.213715 | orchestrator | 2026-03-28 04:10:08 | INFO  | Flavor SCS-1L-1-5 created 2026-03-28 04:10:13.213730 | orchestrator | 2026-03-28 04:10:08 | INFO  | Flavor SCS-1V-2 created 2026-03-28 04:10:13.213739 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-1V-2-5 created 2026-03-28 04:10:13.213748 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-1V-4 created 2026-03-28 04:10:13.213756 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-1V-4-10 created 2026-03-28 04:10:13.213764 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-1V-8 created 2026-03-28 04:10:13.213773 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-1V-8-20 created 2026-03-28 04:10:13.213792 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-2V-4 created 2026-03-28 04:10:13.213801 | orchestrator | 2026-03-28 04:10:09 | INFO  | Flavor SCS-2V-4-10 created 2026-03-28 04:10:13.213809 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-2V-8 created 2026-03-28 04:10:13.213817 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-2V-8-20 created 2026-03-28 04:10:13.213825 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-2V-16 created 2026-03-28 04:10:13.213833 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-2V-16-50 created 2026-03-28 04:10:13.213841 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-4V-8 created 2026-03-28 04:10:13.213848 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-4V-8-20 created 2026-03-28 04:10:13.213856 | orchestrator | 2026-03-28 04:10:10 | INFO  | Flavor SCS-4V-16 created 2026-03-28 04:10:13.213864 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor SCS-4V-16-50 created 2026-03-28 04:10:13.213872 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor 
SCS-4V-32 created 2026-03-28 04:10:13.213880 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor SCS-4V-32-100 created 2026-03-28 04:10:13.213888 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor SCS-8V-16 created 2026-03-28 04:10:13.213896 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor SCS-8V-16-50 created 2026-03-28 04:10:13.213904 | orchestrator | 2026-03-28 04:10:11 | INFO  | Flavor SCS-8V-32 created 2026-03-28 04:10:13.213912 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-8V-32-100 created 2026-03-28 04:10:13.213920 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-16V-32 created 2026-03-28 04:10:13.213928 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-16V-32-100 created 2026-03-28 04:10:13.213936 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-2V-4-20s created 2026-03-28 04:10:13.213943 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-4V-8-50s created 2026-03-28 04:10:13.213951 | orchestrator | 2026-03-28 04:10:12 | INFO  | Flavor SCS-8V-32-100s created 2026-03-28 04:10:15.749460 | orchestrator | 2026-03-28 04:10:15 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-28 04:10:25.878187 | orchestrator | 2026-03-28 04:10:25 | INFO  | Task 1e09acaf-9155-4385-a3ad-184976f8f3a2 (bootstrap-basic) was prepared for execution. 2026-03-28 04:10:25.878554 | orchestrator | 2026-03-28 04:10:25 | INFO  | It takes a moment until task 1e09acaf-9155-4385-a3ad-184976f8f3a2 (bootstrap-basic) has been started and output is visible here. 
2026-03-28 04:11:13.094704 | orchestrator | 2026-03-28 04:11:13.094803 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-28 04:11:13.094816 | orchestrator | 2026-03-28 04:11:13.094824 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 04:11:13.094832 | orchestrator | Saturday 28 March 2026 04:10:31 +0000 (0:00:00.086) 0:00:00.086 ******** 2026-03-28 04:11:13.094840 | orchestrator | ok: [localhost] 2026-03-28 04:11:13.094889 | orchestrator | 2026-03-28 04:11:13.094898 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-28 04:11:13.094905 | orchestrator | Saturday 28 March 2026 04:10:33 +0000 (0:00:02.236) 0:00:02.323 ******** 2026-03-28 04:11:13.094913 | orchestrator | ok: [localhost] 2026-03-28 04:11:13.094920 | orchestrator | 2026-03-28 04:11:13.094928 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-28 04:11:13.094935 | orchestrator | Saturday 28 March 2026 04:10:40 +0000 (0:00:07.691) 0:00:10.015 ******** 2026-03-28 04:11:13.094943 | orchestrator | changed: [localhost] 2026-03-28 04:11:13.094950 | orchestrator | 2026-03-28 04:11:13.094958 | orchestrator | TASK [Create public network] *************************************************** 2026-03-28 04:11:13.094966 | orchestrator | Saturday 28 March 2026 04:10:47 +0000 (0:00:06.723) 0:00:16.739 ******** 2026-03-28 04:11:13.094973 | orchestrator | changed: [localhost] 2026-03-28 04:11:13.094980 | orchestrator | 2026-03-28 04:11:13.094987 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-28 04:11:13.094995 | orchestrator | Saturday 28 March 2026 04:10:53 +0000 (0:00:05.711) 0:00:22.451 ******** 2026-03-28 04:11:13.095006 | orchestrator | changed: [localhost] 2026-03-28 04:11:13.095013 | orchestrator | 2026-03-28 04:11:13.095021 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-28 04:11:13.095028 | orchestrator | Saturday 28 March 2026 04:11:00 +0000 (0:00:06.722) 0:00:29.174 ******** 2026-03-28 04:11:13.095035 | orchestrator | changed: [localhost] 2026-03-28 04:11:13.095042 | orchestrator | 2026-03-28 04:11:13.095050 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-28 04:11:13.095057 | orchestrator | Saturday 28 March 2026 04:11:04 +0000 (0:00:04.642) 0:00:33.816 ******** 2026-03-28 04:11:13.095064 | orchestrator | changed: [localhost] 2026-03-28 04:11:13.095071 | orchestrator | 2026-03-28 04:11:13.095079 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-28 04:11:13.095094 | orchestrator | Saturday 28 March 2026 04:11:08 +0000 (0:00:04.092) 0:00:37.909 ******** 2026-03-28 04:11:13.095101 | orchestrator | ok: [localhost] 2026-03-28 04:11:13.095108 | orchestrator | 2026-03-28 04:11:13.095116 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:11:13.095123 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 04:11:13.095132 | orchestrator | 2026-03-28 04:11:13.095139 | orchestrator | 2026-03-28 04:11:13.095147 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:11:13.095154 | orchestrator | Saturday 28 March 2026 04:11:12 +0000 (0:00:03.917) 0:00:41.827 ******** 2026-03-28 04:11:13.095161 | orchestrator | =============================================================================== 2026-03-28 04:11:13.095168 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.69s 2026-03-28 04:11:13.095176 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.72s 2026-03-28 04:11:13.095183 | 
orchestrator | Set public network to default ------------------------------------------- 6.72s 2026-03-28 04:11:13.095190 | orchestrator | Create public network --------------------------------------------------- 5.71s 2026-03-28 04:11:13.095216 | orchestrator | Create public subnet ---------------------------------------------------- 4.64s 2026-03-28 04:11:13.095224 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.09s 2026-03-28 04:11:13.095232 | orchestrator | Create manager role ----------------------------------------------------- 3.92s 2026-03-28 04:11:13.095241 | orchestrator | Gathering Facts --------------------------------------------------------- 2.24s 2026-03-28 04:11:15.746662 | orchestrator | 2026-03-28 04:11:15 | INFO  | It takes a moment until task 2cf994eb-ad0d-487f-85d1-e0cb9ce837f4 (image-manager) has been started and output is visible here. 2026-03-28 04:12:00.666216 | orchestrator | 2026-03-28 04:11:18 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-28 04:12:00.666316 | orchestrator | 2026-03-28 04:11:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-28 04:12:00.666327 | orchestrator | 2026-03-28 04:11:19 | INFO  | Importing image Cirros 0.6.2 2026-03-28 04:12:00.666335 | orchestrator | 2026-03-28 04:11:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 04:12:00.666343 | orchestrator | 2026-03-28 04:11:21 | INFO  | Waiting for image to leave queued state... 2026-03-28 04:12:00.666351 | orchestrator | 2026-03-28 04:11:23 | INFO  | Waiting for import to complete... 
2026-03-28 04:12:00.666358 | orchestrator | 2026-03-28 04:11:33 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-28 04:12:00.666366 | orchestrator | 2026-03-28 04:11:33 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-28 04:12:00.666373 | orchestrator | 2026-03-28 04:11:33 | INFO  | Setting internal_version = 0.6.2 2026-03-28 04:12:00.666380 | orchestrator | 2026-03-28 04:11:33 | INFO  | Setting image_original_user = cirros 2026-03-28 04:12:00.666388 | orchestrator | 2026-03-28 04:11:33 | INFO  | Adding tag os:cirros 2026-03-28 04:12:00.666394 | orchestrator | 2026-03-28 04:11:34 | INFO  | Setting property architecture: x86_64 2026-03-28 04:12:00.666401 | orchestrator | 2026-03-28 04:11:34 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 04:12:00.666408 | orchestrator | 2026-03-28 04:11:34 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 04:12:00.666415 | orchestrator | 2026-03-28 04:11:34 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 04:12:00.666421 | orchestrator | 2026-03-28 04:11:35 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 04:12:00.666428 | orchestrator | 2026-03-28 04:11:35 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 04:12:00.666435 | orchestrator | 2026-03-28 04:11:35 | INFO  | Setting property os_distro: cirros 2026-03-28 04:12:00.666442 | orchestrator | 2026-03-28 04:11:35 | INFO  | Setting property os_purpose: minimal 2026-03-28 04:12:00.666448 | orchestrator | 2026-03-28 04:11:36 | INFO  | Setting property replace_frequency: never 2026-03-28 04:12:00.666455 | orchestrator | 2026-03-28 04:11:36 | INFO  | Setting property uuid_validity: none 2026-03-28 04:12:00.666461 | orchestrator | 2026-03-28 04:11:36 | INFO  | Setting property provided_until: none 2026-03-28 04:12:00.666468 | orchestrator | 2026-03-28 04:11:37 | INFO  | Setting property image_description: Cirros 2026-03-28 04:12:00.666475 | orchestrator | 2026-03-28 04:11:37 | INFO  | 
Setting property image_name: Cirros 2026-03-28 04:12:00.666481 | orchestrator | 2026-03-28 04:11:37 | INFO  | Setting property internal_version: 0.6.2 2026-03-28 04:12:00.666488 | orchestrator | 2026-03-28 04:11:37 | INFO  | Setting property image_original_user: cirros 2026-03-28 04:12:00.666538 | orchestrator | 2026-03-28 04:11:38 | INFO  | Setting property os_version: 0.6.2 2026-03-28 04:12:00.666558 | orchestrator | 2026-03-28 04:11:38 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 04:12:00.666567 | orchestrator | 2026-03-28 04:11:38 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-28 04:12:00.666573 | orchestrator | 2026-03-28 04:11:39 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-28 04:12:00.666580 | orchestrator | 2026-03-28 04:11:39 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-28 04:12:00.666587 | orchestrator | 2026-03-28 04:11:39 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-28 04:12:00.666593 | orchestrator | 2026-03-28 04:11:39 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-28 04:12:00.666604 | orchestrator | 2026-03-28 04:11:39 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-28 04:12:00.666611 | orchestrator | 2026-03-28 04:11:39 | INFO  | Importing image Cirros 0.6.3 2026-03-28 04:12:00.666617 | orchestrator | 2026-03-28 04:11:39 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-28 04:12:00.666624 | orchestrator | 2026-03-28 04:11:40 | INFO  | Waiting for image to leave queued state... 2026-03-28 04:12:00.666630 | orchestrator | 2026-03-28 04:11:43 | INFO  | Waiting for import to complete... 
2026-03-28 04:12:00.666651 | orchestrator | 2026-03-28 04:11:53 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-28 04:12:00.666658 | orchestrator | 2026-03-28 04:11:54 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-28 04:12:00.666686 | orchestrator | 2026-03-28 04:11:54 | INFO  | Setting internal_version = 0.6.3
2026-03-28 04:12:00.666695 | orchestrator | 2026-03-28 04:11:54 | INFO  | Setting image_original_user = cirros
2026-03-28 04:12:00.666702 | orchestrator | 2026-03-28 04:11:54 | INFO  | Adding tag os:cirros
2026-03-28 04:12:00.666708 | orchestrator | 2026-03-28 04:11:54 | INFO  | Setting property architecture: x86_64
2026-03-28 04:12:00.666715 | orchestrator | 2026-03-28 04:11:54 | INFO  | Setting property hw_disk_bus: scsi
2026-03-28 04:12:00.666721 | orchestrator | 2026-03-28 04:11:54 | INFO  | Setting property hw_rng_model: virtio
2026-03-28 04:12:00.666728 | orchestrator | 2026-03-28 04:11:55 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-28 04:12:00.666734 | orchestrator | 2026-03-28 04:11:55 | INFO  | Setting property hw_watchdog_action: reset
2026-03-28 04:12:00.666741 | orchestrator | 2026-03-28 04:11:55 | INFO  | Setting property hypervisor_type: qemu
2026-03-28 04:12:00.666748 | orchestrator | 2026-03-28 04:11:56 | INFO  | Setting property os_distro: cirros
2026-03-28 04:12:00.666754 | orchestrator | 2026-03-28 04:11:56 | INFO  | Setting property os_purpose: minimal
2026-03-28 04:12:00.666761 | orchestrator | 2026-03-28 04:11:56 | INFO  | Setting property replace_frequency: never
2026-03-28 04:12:00.666768 | orchestrator | 2026-03-28 04:11:56 | INFO  | Setting property uuid_validity: none
2026-03-28 04:12:00.666774 | orchestrator | 2026-03-28 04:11:57 | INFO  | Setting property provided_until: none
2026-03-28 04:12:00.666781 | orchestrator | 2026-03-28 04:11:57 | INFO  | Setting property image_description: Cirros
2026-03-28 04:12:00.666788 | orchestrator | 2026-03-28 04:11:57 | INFO  | Setting property image_name: Cirros
2026-03-28 04:12:00.666794 | orchestrator | 2026-03-28 04:11:58 | INFO  | Setting property internal_version: 0.6.3
2026-03-28 04:12:00.666807 | orchestrator | 2026-03-28 04:11:58 | INFO  | Setting property image_original_user: cirros
2026-03-28 04:12:00.666814 | orchestrator | 2026-03-28 04:11:58 | INFO  | Setting property os_version: 0.6.3
2026-03-28 04:12:00.666821 | orchestrator | 2026-03-28 04:11:58 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-28 04:12:00.666827 | orchestrator | 2026-03-28 04:11:59 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-28 04:12:00.666834 | orchestrator | 2026-03-28 04:11:59 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-28 04:12:00.666849 | orchestrator | 2026-03-28 04:11:59 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-28 04:12:00.666856 | orchestrator | 2026-03-28 04:11:59 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-28 04:12:01.046223 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-28 04:12:03.495111 | orchestrator | 2026-03-28 04:12:03 | INFO  | date: 2026-03-28
2026-03-28 04:12:03.495209 | orchestrator | 2026-03-28 04:12:03 | INFO  | image: octavia-amphora-haproxy-2024.2.20260328.qcow2
2026-03-28 04:12:03.495249 | orchestrator | 2026-03-28 04:12:03 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2
2026-03-28 04:12:03.495264 | orchestrator | 2026-03-28 04:12:03 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2.CHECKSUM
2026-03-28 04:12:03.687315 | orchestrator | 2026-03-28 04:12:03 | INFO  | checksum: d8129f2399256e335fa58752e7bcbe178527a1e3d0a6709e3e9c03f99848308a
2026-03-28 04:12:03.785419 | orchestrator | 2026-03-28 04:12:03 | INFO  | It takes a moment until task 0f588760-4f35-4b7d-b24f-4a185ea5bacb (image-manager) has been started and output is visible here.
2026-03-28 04:13:16.970276 | orchestrator | 2026-03-28 04:12:06 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-28'
2026-03-28 04:13:16.970374 | orchestrator | 2026-03-28 04:12:06 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2: 200
2026-03-28 04:13:16.970387 | orchestrator | 2026-03-28 04:12:06 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-28
2026-03-28 04:13:16.970396 | orchestrator | 2026-03-28 04:12:06 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2
2026-03-28 04:13:16.970425 | orchestrator | 2026-03-28 04:12:07 | INFO  | Waiting for image to leave queued state...
2026-03-28 04:13:16.970433 | orchestrator | 2026-03-28 04:12:09 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970441 | orchestrator | 2026-03-28 04:12:20 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970449 | orchestrator | 2026-03-28 04:12:30 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970456 | orchestrator | 2026-03-28 04:12:40 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970465 | orchestrator | 2026-03-28 04:12:50 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970473 | orchestrator | 2026-03-28 04:13:00 | INFO  | Waiting for import to complete...
2026-03-28 04:13:16.970480 | orchestrator | 2026-03-28 04:13:10 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-28' successfully completed, reloading images
2026-03-28 04:13:16.970489 | orchestrator | 2026-03-28 04:13:11 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-28'
2026-03-28 04:13:16.970517 | orchestrator | 2026-03-28 04:13:11 | INFO  | Setting internal_version = 2026-03-28
2026-03-28 04:13:16.970525 | orchestrator | 2026-03-28 04:13:11 | INFO  | Setting image_original_user = ubuntu
2026-03-28 04:13:16.970533 | orchestrator | 2026-03-28 04:13:11 | INFO  | Adding tag amphora
2026-03-28 04:13:16.970540 | orchestrator | 2026-03-28 04:13:11 | INFO  | Adding tag os:ubuntu
2026-03-28 04:13:16.970547 | orchestrator | 2026-03-28 04:13:11 | INFO  | Setting property architecture: x86_64
2026-03-28 04:13:16.970554 | orchestrator | 2026-03-28 04:13:12 | INFO  | Setting property hw_disk_bus: scsi
2026-03-28 04:13:16.970561 | orchestrator | 2026-03-28 04:13:12 | INFO  | Setting property hw_rng_model: virtio
2026-03-28 04:13:16.970568 | orchestrator | 2026-03-28 04:13:12 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-28 04:13:16.970576 | orchestrator | 2026-03-28 04:13:12 | INFO  | Setting property hw_watchdog_action: reset
2026-03-28 04:13:16.970583 | orchestrator | 2026-03-28 04:13:13 | INFO  | Setting property hypervisor_type: qemu
2026-03-28 04:13:16.970595 | orchestrator | 2026-03-28 04:13:13 | INFO  | Setting property os_distro: ubuntu
2026-03-28 04:13:16.970608 | orchestrator | 2026-03-28 04:13:13 | INFO  | Setting property replace_frequency: quarterly
2026-03-28 04:13:16.970620 | orchestrator | 2026-03-28 04:13:13 | INFO  | Setting property uuid_validity: last-1
2026-03-28 04:13:16.970634 | orchestrator | 2026-03-28 04:13:14 | INFO  | Setting property provided_until: none
2026-03-28 04:13:16.970647 | orchestrator | 2026-03-28 04:13:14 | INFO  | Setting property os_purpose: network
2026-03-28 04:13:16.970674 | orchestrator | 2026-03-28 04:13:14 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-28 04:13:16.970688 | orchestrator | 2026-03-28 04:13:14 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-28 04:13:16.970701 | orchestrator | 2026-03-28 04:13:15 | INFO  | Setting property internal_version: 2026-03-28
2026-03-28 04:13:16.970714 | orchestrator | 2026-03-28 04:13:15 | INFO  | Setting property image_original_user: ubuntu
2026-03-28 04:13:16.970727 | orchestrator | 2026-03-28 04:13:15 | INFO  | Setting property os_version: 2026-03-28
2026-03-28 04:13:16.970740 | orchestrator | 2026-03-28 04:13:16 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2
2026-03-28 04:13:16.970753 | orchestrator | 2026-03-28 04:13:16 | INFO  | Setting property image_build_date: 2026-03-28
2026-03-28 04:13:16.970766 | orchestrator | 2026-03-28 04:13:16 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-28'
2026-03-28 04:13:16.970779 | orchestrator | 2026-03-28 04:13:16 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-28'
2026-03-28 04:13:16.970811 | orchestrator | 2026-03-28 04:13:16 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-28 04:13:16.970820 | orchestrator | 2026-03-28 04:13:16 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-28 04:13:16.970829 | orchestrator | 2026-03-28 04:13:16 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-28 04:13:16.970837 | orchestrator | 2026-03-28 04:13:16 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-28 04:13:17.498106 | orchestrator | ok: Runtime: 0:03:15.211072
2026-03-28 04:13:17.517016 |
2026-03-28 04:13:17.517156 | TASK [Run checks]
2026-03-28 04:13:18.199362 | orchestrator | + set -e
2026-03-28 04:13:18.199509 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 04:13:18.199520 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 04:13:18.199529 | orchestrator | ++ INTERACTIVE=false
2026-03-28 04:13:18.199535 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 04:13:18.199539 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 04:13:18.199546 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 04:13:18.200951 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-28 04:13:18.206972 | orchestrator |
2026-03-28 04:13:18.207041 | orchestrator | # CHECK
2026-03-28 04:13:18.207047 | orchestrator |
2026-03-28 04:13:18.207051 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-28 04:13:18.207058 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-28 04:13:18.207063 | orchestrator | + echo
2026-03-28 04:13:18.207067 | orchestrator | + echo '# CHECK'
2026-03-28 04:13:18.207071 | orchestrator | + echo
2026-03-28 04:13:18.207077 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 04:13:18.207651 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-28 04:13:18.278215 | orchestrator |
2026-03-28 04:13:18.278310 | orchestrator | ## Containers @ testbed-manager
2026-03-28 04:13:18.278324 | orchestrator |
2026-03-28 04:13:18.278335 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-28 04:13:18.278344 | orchestrator | + echo
2026-03-28 04:13:18.278352 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-28 04:13:18.278362 | orchestrator | + echo
2026-03-28 04:13:18.278367 | orchestrator | + osism container testbed-manager ps
2026-03-28 04:13:20.397072 | orchestrator | 2026-03-28 04:13:20 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-28 04:13:20.777581 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 04:13:20.777683 | orchestrator | 15e97a6b342b registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-03-28 04:13:20.777704 | orchestrator | 80823929a9e2 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-03-28 04:13:20.777722 | orchestrator | 2747fd59a744 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-03-28 04:13:20.777731 | orchestrator | 62ba9101f042 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-28 04:13:20.777740 | orchestrator | 8ac4e0d9dc83 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-03-28 04:13:20.777754 | orchestrator | 1003eb745fb1 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient
2026-03-28 04:13:20.777763 | orchestrator | 5c2ff3aceac5 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-28 04:13:20.777773 | orchestrator | e62030719841 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-28 04:13:20.777801 | orchestrator | 72cd8a99e59f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-28 04:13:20.777811 | orchestrator | 39dd46f8462b registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-03-28 04:13:20.777820 | orchestrator | 9cde333af545 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-03-28 04:13:20.777829 | orchestrator | 02d052118575 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-03-28 04:13:20.777839 | orchestrator | c1ceaa072a2d registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-03-28 04:13:20.777847 | orchestrator | 8f5a8bb5e543 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-28 04:13:20.777875 | orchestrator | d377310b4c06 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-03-28 04:13:20.777885 | orchestrator | 13ed8c5accfd registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-03-28 04:13:20.777894 | orchestrator | 19ccade5c5c3 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-03-28 04:13:20.777904 | orchestrator | 248002ebb3d9 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-03-28 04:13:20.777912 | orchestrator | e701ec9b7705 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-03-28 04:13:20.777921 | orchestrator | 45fb02b2e83a registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-03-28 04:13:20.777930 | orchestrator | e4016e785ff6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-03-28 04:13:20.777939 | orchestrator | 395fe62e595e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-28 04:13:20.777954 | orchestrator | c7297e739ba9 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-03-28 04:13:20.777963 | orchestrator | 899d2bf19f6c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-03-28 04:13:20.777972 | orchestrator | 83f40863c3fc registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-03-28 04:13:20.777981 | orchestrator | 017178ef1277 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-03-28 04:13:20.777990 | orchestrator | 7003ae35abdb registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-03-28 04:13:20.777999 | orchestrator | d3fbd49576d8 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-28 04:13:20.778011 | orchestrator | 68170bfa8899 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-03-28 04:13:20.778082 | orchestrator | 89509e4b92ce registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-28 04:13:21.139531 | orchestrator |
2026-03-28 04:13:21.139607 | orchestrator | ## Images @ testbed-manager
2026-03-28 04:13:21.139615 | orchestrator |
2026-03-28 04:13:21.139621 | orchestrator | + echo
2026-03-28 04:13:21.139627 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-28 04:13:21.139632 | orchestrator | + echo
2026-03-28 04:13:21.139640 | orchestrator | + osism container testbed-manager images
2026-03-28 04:13:23.540950 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-28 04:13:23.541024 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 4f363275599b 24 hours ago 239MB
2026-03-28 04:13:23.541033 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB
2026-03-28 04:13:23.541041 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-28 04:13:23.541047 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-28 04:13:23.541056 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-28 04:13:23.541062 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-28 04:13:23.541067 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-28 04:13:23.541071 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-28 04:13:23.541075 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-28 04:13:23.541096 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-28 04:13:23.541101 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-28 04:13:23.541105 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-28 04:13:23.541108 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-28 04:13:23.541112 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-28 04:13:23.541116 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-28 04:13:23.541121 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-28 04:13:23.541124 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-28 04:13:23.541128 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-28 04:13:23.541132 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-28 04:13:23.541136 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-28 04:13:23.541139 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-03-28 04:13:23.541143 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-03-28 04:13:23.541147 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-28 04:13:23.541151 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-28 04:13:23.541155 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-03-28 04:13:23.920423 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 04:13:23.920998 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-28 04:13:23.977726 | orchestrator |
2026-03-28 04:13:23.977798 | orchestrator | ## Containers @ testbed-node-0
2026-03-28 04:13:23.977807 | orchestrator |
2026-03-28 04:13:23.977811 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-28 04:13:23.977816 | orchestrator | + echo
2026-03-28 04:13:23.977820 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-28 04:13:23.977825 | orchestrator | + echo
2026-03-28 04:13:23.977829 | orchestrator | + osism container testbed-node-0 ps
2026-03-28 04:13:26.571145 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 04:13:26.571255 | orchestrator | 1640184396ec registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-28 04:13:26.571272 | orchestrator | f628be731ddf registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-28 04:13:26.571284 | orchestrator | fa697a23c8c9 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-03-28 04:13:26.571295 | orchestrator | 9c04cf992cb0 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-28 04:13:26.571328 | orchestrator | 4454692a05e5 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-03-28 04:13:26.571340 | orchestrator | aedf28104202 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-28 04:13:26.571357 | orchestrator | 0b14a29ecac5 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-28 04:13:26.571402 | orchestrator | 7c4d60a46811 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-28 04:13:26.571416 | orchestrator | 2ca6e8ba07ca registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_share 2026-03-28 04:13:26.571427 | orchestrator | 60375c2091b7 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-03-28 04:13:26.571438 | orchestrator | c2a78811650a registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-28 04:13:26.571449 | orchestrator | f729df24f50b registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-28 04:13:26.571460 | orchestrator | 2a83ba47ba8c registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-28 04:13:26.571471 | orchestrator | 4a08a553361a registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-28 04:13:26.571482 | orchestrator | 3eb46c7bb3c1 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-28 04:13:26.571492 | orchestrator | 5e0590d38c12 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-03-28 04:13:26.571509 | orchestrator | 3a4891d41a17 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-28 04:13:26.571520 | orchestrator | ea3f4a7e64b8 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification 2026-03-28 04:13:26.571531 | orchestrator | 76974f6470ad registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-28 04:13:26.571561 | orchestrator | 26a4dab80d8d 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-28 04:13:26.571573 | orchestrator | 31d0e9b52bda registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-28 04:13:26.571592 | orchestrator | 66c9b50bce1c registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 22 minutes octavia_driver_agent 2026-03-28 04:13:26.571622 | orchestrator | e912df9dbba9 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-03-28 04:13:26.571648 | orchestrator | b7e544db59f5 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-28 04:13:26.571669 | orchestrator | a42ca1a73f51 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-28 04:13:26.571692 | orchestrator | 78fc8fee851d registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-28 04:13:26.571711 | orchestrator | ec7cb3dcc4ef registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-03-28 04:13:26.571729 | orchestrator | 82295793c3c2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-03-28 04:13:26.571745 | orchestrator | e43a1f22571e registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 
2026-03-28 04:13:26.571763 | orchestrator | 28491ce0e75f registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-03-28 04:13:26.571781 | orchestrator | 53592cddd56a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-03-28 04:13:26.571800 | orchestrator | 6140301cef63 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-28 04:13:26.571818 | orchestrator | ab595699097f registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-28 04:13:26.571834 | orchestrator | 547f82d56f83 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-03-28 04:13:26.571851 | orchestrator | 10a63dbc9cd0 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-28 04:13:26.571868 | orchestrator | bcea245c04e1 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-03-28 04:13:26.571886 | orchestrator | 91947a034bfc registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-03-28 04:13:26.571904 | orchestrator | bef228e8613b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-28 04:13:26.571930 | orchestrator | 42c3376337a1 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) 
skyline_apiserver 2026-03-28 04:13:26.571975 | orchestrator | aa84377efc05 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-28 04:13:26.571994 | orchestrator | 6ddfca29be4b registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-28 04:13:26.572012 | orchestrator | fa2c949fff86 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-28 04:13:26.572030 | orchestrator | 17a7aa4953fa registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-03-28 04:13:26.572048 | orchestrator | 6a37bc95319f registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-28 04:13:26.572067 | orchestrator | 205709862eb7 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-28 04:13:26.572086 | orchestrator | 5d8db817c0cd registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api 2026-03-28 04:13:26.572104 | orchestrator | 8eeef58862c2 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-03-28 04:13:26.572122 | orchestrator | ecce5c30ee15 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-28 04:13:26.572139 | orchestrator | 6828fcc9e1a1 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh 2026-03-28 04:13:26.572157 | 
orchestrator | ae526ab0c04e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 58 minutes ceph-mgr-testbed-node-0 2026-03-28 04:13:26.572175 | orchestrator | af8c99fffd14 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-03-28 04:13:26.572201 | orchestrator | a580dbf75b8e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-03-28 04:13:26.572221 | orchestrator | b9d3087c615a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-28 04:13:26.572240 | orchestrator | 1c77f6715569 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-28 04:13:26.572257 | orchestrator | 9b02b9649d75 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-28 04:13:26.572268 | orchestrator | 9cfed255a110 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-28 04:13:26.572279 | orchestrator | d0b56039f413 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-28 04:13:26.572299 | orchestrator | 82b64bdce522 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-28 04:13:26.572310 | orchestrator | 228a2e393b95 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-28 04:13:26.572330 | orchestrator | 46d3e2f57ce6 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-28 04:13:26.572341 | orchestrator | 6bab75257c14 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-28 04:13:26.572352 | orchestrator | 38222eba60ca registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-28 04:13:26.572363 | orchestrator | 2e1277eea523 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-28 04:13:26.572444 | orchestrator | f1589f8afeb3 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-03-28 04:13:26.572465 | orchestrator | 944d661adf1d registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-28 04:13:26.572478 | orchestrator | 326f4dbe36df registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-28 04:13:26.572489 | orchestrator | 69278eff2079 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-28 04:13:26.572500 | orchestrator | 8295dce05514 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-28 04:13:26.572512 | orchestrator | 02d4bcd08ef1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-28 04:13:26.572532 | orchestrator | a66d00e6e94d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 
2026-03-28 04:13:26.572550 | orchestrator | a8c405cd8c58 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-28 04:13:26.911062 | orchestrator |
2026-03-28 04:13:26.911174 | orchestrator | ## Images @ testbed-node-0
2026-03-28 04:13:26.911207 | orchestrator |
2026-03-28 04:13:26.911227 | orchestrator | + echo
2026-03-28 04:13:26.911245 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-28 04:13:26.911264 | orchestrator | + echo
2026-03-28 04:13:26.911283 | orchestrator | + osism container testbed-node-0 images
2026-03-28 04:13:29.505439 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-28 04:13:29.505530 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-28 04:13:29.505543 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-28 04:13:29.505552 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-28 04:13:29.505587 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-28 04:13:29.505601 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-28 04:13:29.505614 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-28 04:13:29.505626 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-28 04:13:29.505639 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-28 04:13:29.505651 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-28 04:13:29.505665 | orchestrator | registry.osism.tech/kolla/release/haproxy
2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-28 04:13:29.505678 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 04:13:29.505691 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-28 04:13:29.505704 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-28 04:13:29.505719 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-28 04:13:29.505727 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-28 04:13:29.505736 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-28 04:13:29.505744 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-28 04:13:29.505769 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-28 04:13:29.505783 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-28 04:13:29.505797 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-28 04:13:29.505809 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-28 04:13:29.505821 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-28 04:13:29.505832 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-28 04:13:29.505843 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-28 04:13:29.505862 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-28 04:13:29.505875 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-28 04:13:29.505887 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-28 04:13:29.505900 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-28 04:13:29.505914 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-28 04:13:29.505939 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-28 04:13:29.505952 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-28 04:13:29.505989 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-28 04:13:29.506004 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-28 04:13:29.506074 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-28 04:13:29.506093 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-28 04:13:29.506108 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-28 04:13:29.506123 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-28 04:13:29.506138 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-28 04:13:29.506153 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-28 04:13:29.506165 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-28 04:13:29.506175 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-28 04:13:29.506184 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-28 04:13:29.506194 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-28 04:13:29.506203 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-28 04:13:29.506217 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-28 04:13:29.506230 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-28 04:13:29.506252 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-28 04:13:29.506266 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-28 04:13:29.506279 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-28 04:13:29.506292 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-28 04:13:29.506305 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-28 04:13:29.506319 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-28 04:13:29.506329 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-28 04:13:29.506337 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-28 04:13:29.506345 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-28 04:13:29.506384 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-28 04:13:29.506394 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-28 04:13:29.506402 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-28 04:13:29.506409 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-28 04:13:29.506417 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-28 04:13:29.506425 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-28 04:13:29.506433 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-28 04:13:29.506441 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-28 04:13:29.506458 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-28 04:13:29.506466 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-28 04:13:29.506474 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-28 04:13:29.506482 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-28 04:13:29.506489 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-28 04:13:29.506497 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-28 04:13:29.841560 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 04:13:29.842678 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-28 04:13:29.904979 | orchestrator |
2026-03-28 04:13:29.905062 | orchestrator | ## Containers @ testbed-node-1
2026-03-28 04:13:29.905075 | orchestrator |
2026-03-28 04:13:29.905082 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-28 04:13:29.905089 | orchestrator | + echo
2026-03-28 04:13:29.905097 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-28 04:13:29.905104 | orchestrator | + echo
2026-03-28 04:13:29.905112 | orchestrator | + osism container testbed-node-1 ps
2026-03-28 04:13:32.482983 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 04:13:32.483074 | orchestrator | ba7ae2ddf145 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-28 04:13:32.483092 | orchestrator | 347c86d9d63c registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-28 04:13:32.483106 | orchestrator | d3d1f3c93bc5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-28 04:13:32.483119 | orchestrator | 3bbd748f4144 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…"
9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-28 04:13:32.483157 | orchestrator | b0e850c40c6d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-03-28 04:13:32.483190 | orchestrator | b19ca4f30dc9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-28 04:13:32.483200 | orchestrator | 2526b3bdf116 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-28 04:13:32.483211 | orchestrator | 75da0c5a43b0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-28 04:13:32.483219 | orchestrator | 17e1acf10449 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-03-28 04:13:32.483226 | orchestrator | aaf555f27d7f registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-03-28 04:13:32.483233 | orchestrator | 86026eeae904 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-28 04:13:32.483240 | orchestrator | 8f353fc395d2 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-28 04:13:32.484328 | orchestrator | 13fc35a192f9 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-28 04:13:32.484400 | orchestrator | 22551838c064 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-28 04:13:32.484410 | orchestrator | f355f3c662d7 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator 2026-03-28 04:13:32.484417 | orchestrator | 98eba3fbbe22 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-03-28 04:13:32.484424 | orchestrator | 035eaa4b869f registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 20 minutes ceilometer_central 2026-03-28 04:13:32.484432 | orchestrator | 6b1a05124802 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification 2026-03-28 04:13:32.484439 | orchestrator | 5ede49848901 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-28 04:13:32.484446 | orchestrator | 236e7b07eadd registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-28 04:13:32.484453 | orchestrator | 93afdf34e0f5 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-28 04:13:32.484460 | orchestrator | 07fc4488b7a7 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent 2026-03-28 04:13:32.484468 | orchestrator | 9dc93b38ef97 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-03-28 04:13:32.484485 | orchestrator | f774ff688636 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-28 04:13:32.484493 | orchestrator | 8afea92866fb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-28 04:13:32.484500 | orchestrator | aabd96e6dca2 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-28 04:13:32.484507 | orchestrator | b3e13c213c77 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) designate_central 2026-03-28 04:13:32.484520 | orchestrator | 75c3918bb222 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-03-28 04:13:32.484527 | orchestrator | c7a6868fcc58 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-03-28 04:13:32.484537 | orchestrator | c016b6ae4790 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-03-28 04:13:32.484550 | orchestrator | e9fa4ed35570 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-03-28 04:13:32.484562 | orchestrator | da52af2ae2ab registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-28 04:13:32.484589 | orchestrator | 5b088681458a registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-03-28 
04:13:32.484603 | orchestrator | abf6497117ca registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-03-28 04:13:32.484616 | orchestrator | eb9216d3c46e registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-28 04:13:32.484629 | orchestrator | e5f88abd9c4d registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-28 04:13:32.484641 | orchestrator | a480d55ee5d5 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-03-28 04:13:32.484651 | orchestrator | 6b5bf4c3e196 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-28 04:13:32.484658 | orchestrator | 0a28879008c8 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-03-28 04:13:32.484666 | orchestrator | e1387b2a4b4e registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-28 04:13:32.484673 | orchestrator | 0ca37c821bf0 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-28 04:13:32.484693 | orchestrator | 24320a9e5317 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-28 04:13:32.484700 | orchestrator | 0e808fcde88b registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-03-28 04:13:32.484707 | orchestrator | 
8a347fcbd133 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-28 04:13:32.484714 | orchestrator | 42f8643a0065 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-28 04:13:32.484721 | orchestrator | b91be174c339 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api 2026-03-28 04:13:32.484728 | orchestrator | 2ad6116dc80a registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-03-28 04:13:32.484735 | orchestrator | 3a45bf6e0378 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-28 04:13:32.484742 | orchestrator | 3782ec712408 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-03-28 04:13:32.484749 | orchestrator | 5640c1760a24 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1 2026-03-28 04:13:32.484757 | orchestrator | 08a47d81d49d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-03-28 04:13:32.484764 | orchestrator | 63c01d28d51e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-03-28 04:13:32.484778 | orchestrator | 53b00e98f6ff registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-28 04:13:32.484786 | orchestrator | d56f0cc78458 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-28 04:13:32.484798 | orchestrator | 67c55053a716 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-28 04:13:32.484805 | orchestrator | 6e5ba8005abf registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-28 04:13:32.484812 | orchestrator | ae880ad5fff5 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-28 04:13:32.484822 | orchestrator | c427f47e36ac registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-28 04:13:32.484841 | orchestrator | 0f588228d749 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-28 04:13:32.484854 | orchestrator | 4e69d2d2151f registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-28 04:13:32.484865 | orchestrator | ed1ea41e10cc registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-28 04:13:32.484877 | orchestrator | 2673d5eb3033 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-28 04:13:32.484889 | orchestrator | ab9b43622dd1 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-28 04:13:32.484901 | orchestrator | ee43bf108d1d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-03-28 04:13:32.484912 | orchestrator | fddda41dd500 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-28 04:13:32.484923 | orchestrator | 3893c3c265d7 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-28 04:13:32.484935 | orchestrator | 7b42078de3ff registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-28 04:13:32.484948 | orchestrator | ba479f5614ab registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-28 04:13:32.484960 | orchestrator | 3a2b17c599d9 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-28 04:13:32.484973 | orchestrator | 7f2a88ecebc2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-28 04:13:32.484981 | orchestrator | 889ffe08e591 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-28 04:13:32.837617 | orchestrator |
2026-03-28 04:13:32.837721 | orchestrator | ## Images @ testbed-node-1
2026-03-28 04:13:32.837738 | orchestrator |
2026-03-28 04:13:32.837751 | orchestrator | + echo
2026-03-28 04:13:32.837765 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-28 04:13:32.837778 | orchestrator | + echo
2026-03-28 04:13:32.837790 | orchestrator | + osism container testbed-node-1 images
2026-03-28 04:13:35.572775 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-28 04:13:35.572865 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-28 04:13:35.572881 | orchestrator |
registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-28 04:13:35.572895 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-28 04:13:35.572909 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-28 04:13:35.572922 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-28 04:13:35.572958 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-28 04:13:35.572980 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-28 04:13:35.572993 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-28 04:13:35.573004 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-28 04:13:35.573015 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-28 04:13:35.573024 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-28 04:13:35.573035 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-28 04:13:35.573046 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-28 04:13:35.573057 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-28 04:13:35.573069 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-28 04:13:35.573080 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 
aedc672fb472 3 months ago 301MB
2026-03-28 04:13:35.573089 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-28 04:13:35.573096 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-28 04:13:35.573102 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-28 04:13:35.573123 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-28 04:13:35.573130 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-28 04:13:35.573137 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-28 04:13:35.573143 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-28 04:13:35.573150 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-28 04:13:35.573157 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-28 04:13:35.573163 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-28 04:13:35.573174 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-28 04:13:35.573180 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-28 04:13:35.573187 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-28 04:13:35.573194 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-28 04:13:35.573201 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-28 04:13:35.573231 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-28 04:13:35.573238 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-28 04:13:35.573245 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-28 04:13:35.573251 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-28 04:13:35.573258 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-28 04:13:35.573264 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-28 04:13:35.573271 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-28 04:13:35.573277 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-28 04:13:35.573284 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-28 04:13:35.573290 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-28 04:13:35.573297 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-28 04:13:35.573303 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-28 04:13:35.573310 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-28 04:13:35.573316 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-28 04:13:35.573323 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-28 04:13:35.573329 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-28 04:13:35.573336 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-28 04:13:35.573369 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-28 04:13:35.573377 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-28 04:13:35.573383 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-28 04:13:35.573390 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-28 04:13:35.573397 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-28 04:13:35.573403 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-28 04:13:35.573410 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-28 04:13:35.573417 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-28 04:13:35.573423 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-28 04:13:35.573435 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-28 04:13:35.573441 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-28 04:13:35.573448 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-28 04:13:35.573455 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-28 04:13:35.573461 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-28 04:13:35.573468 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-28 04:13:35.573479 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-28 04:13:35.573486 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-28 04:13:35.573493 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-28 04:13:35.573499 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-28 04:13:35.573506 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-28 04:13:35.573512 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-28 04:13:35.935705 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-28 04:13:35.936812 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-28 04:13:35.996724 | orchestrator |
2026-03-28 04:13:35.996808 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-28 04:13:35.996820 | orchestrator | + echo
2026-03-28 04:13:35.996829 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-28 04:13:35.996838 | orchestrator | ## Containers @ testbed-node-2
2026-03-28 04:13:35.996847 | orchestrator |
2026-03-28 04:13:35.996855 | orchestrator | + echo
2026-03-28 04:13:35.996864 | orchestrator | + osism container testbed-node-2 ps
2026-03-28 04:13:38.535782 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-28 04:13:38.535921 | orchestrator | 9e963182cab0 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-28 04:13:38.535943 | orchestrator | 6d33b32436a8 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-28 04:13:38.535956 | orchestrator | 87522e245822 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-28 04:13:38.535969 | orchestrator | 4f468cc7ee08 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-28 04:13:38.535983 | orchestrator | 9c3c8d4dc877 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-03-28 04:13:38.535995 | orchestrator | f25d7a64bc27 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-28 04:13:38.536009 | orchestrator | 69a4b7a50e78 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-28 04:13:38.536042 | orchestrator | ac263f6b4a1d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-28 04:13:38.536053 | orchestrator | 243a81017aa0 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share
2026-03-28 04:13:38.536064 | orchestrator | c878691bb5e4 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-28 04:13:38.536075 | orchestrator | bedc427bf04a registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-28 04:13:38.536091 | orchestrator | f90eca5b75d1 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-28 04:13:38.536102 | orchestrator | 003e462903c2 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-28 04:13:38.536113 | orchestrator | 2444b7e1b133 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-28 04:13:38.536123 | orchestrator | c67aff3bc25c registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator
2026-03-28 04:13:38.536133 | orchestrator | 05fae5599175 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api
2026-03-28 04:13:38.536144 | orchestrator | e6539f7337d4 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central
2026-03-28 04:13:38.536155 | orchestrator | 97f828036d0b registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification
2026-03-28 04:13:38.536166 | orchestrator | 8024dc83c6ef registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-28 04:13:38.536198 | orchestrator | 9dffac2b7853 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-28 04:13:38.536210 | orchestrator | 7a29662640ce registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_health_manager
2026-03-28 04:13:38.536221 | orchestrator | 875a1da92558 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent
2026-03-28 04:13:38.536232 | orchestrator | b955c88369e6 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api
2026-03-28 04:13:38.536244 | orchestrator | f3e8f0d7ecd2 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-28 04:13:38.536263 | orchestrator | ca49433ecfa8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-28 04:13:38.536274 | orchestrator | 0c9902b1800d registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-28 04:13:38.536285 | orchestrator | 7d453ad959ef registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central
2026-03-28 04:13:38.536296 | orchestrator | 4ea7840bb76b registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-03-28 04:13:38.536306 | orchestrator | f00a1bc278d1 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-03-28 04:13:38.536316 | orchestrator | 552f1e89a431 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-03-28 04:13:38.536327 | orchestrator | b71d9aee9adf registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-03-28 04:13:38.536363 | orchestrator | 91365c5a7d8c registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-03-28 04:13:38.536375 | orchestrator | 5bd1e2c89b8d registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-03-28 04:13:38.536386 | orchestrator | 2079377f71da registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-03-28 04:13:38.536397 | orchestrator | 060a90306c69 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-03-28 04:13:38.536407 | orchestrator | e54824682f39 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-28 04:13:38.536419 | orchestrator | 89c71d8a4f07 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-03-28 04:13:38.536444 | orchestrator | cdade2f397fe registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-03-28 04:13:38.536460 | orchestrator | 6f3e64b69f4e registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-03-28 04:13:38.536471 | orchestrator | 3fc6710eed00 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-03-28 04:13:38.536482 | orchestrator | 1cb10b0d3160 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-03-28 04:13:38.536492 | orchestrator | 5619ac89f53f registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-03-28 04:13:38.536512 | orchestrator | 4d08578fba9e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-03-28 04:13:38.536523 | orchestrator | 454b9c429291 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-03-28 04:13:38.536532 | orchestrator | 667287c6a825 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-03-28 04:13:38.536543 | orchestrator | cedf66f7aa67 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-03-28 04:13:38.536554 | orchestrator | 2f9f2b27009e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-03-28 04:13:38.536565 | orchestrator | 39b65495b078 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-03-28 04:13:38.536576 | orchestrator | 5373949ef764 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh
2026-03-28 04:13:38.536588 | orchestrator | 1146979b5592 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2
2026-03-28 04:13:38.536599 | orchestrator | 9df3c89d093a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-03-28 04:13:38.536609 | orchestrator | 99ef085e2de2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-03-28 04:13:38.536624 | orchestrator | 92007be8c8f5 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-28 04:13:38.536635 | orchestrator | ed057b93ec14 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-28 04:13:38.536645 | orchestrator | 689aee6f8405 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-28 04:13:38.536655 | orchestrator | 565c6e12dfca registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-28 04:13:38.536665 | orchestrator | 3e6353a536da registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-28 04:13:38.536682 | orchestrator | 69a64afad637 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-28 04:13:38.536692 | orchestrator | f7b719e2e95f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-28 04:13:38.536702 | orchestrator | 80e75114ae54 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-28 04:13:38.536720 | orchestrator | 275101701ecd registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-28 04:13:38.536731 | orchestrator | 46ef84c5b1ca registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-28 04:13:38.536742 | orchestrator | fbcb7769a9c8 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-28 04:13:38.536752 | orchestrator | 419e392527f5 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-03-28 04:13:38.536762 | orchestrator | f81c8f184fca registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-28 04:13:38.536773 | orchestrator | c06ce2c98260 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-28 04:13:38.536780 | orchestrator | fb81f596ea5e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-28 04:13:38.536786 | orchestrator | 4011ca197877 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-28 04:13:38.536792 | orchestrator | fbd5e18fcb2d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-28 04:13:38.536799 | orchestrator | 66ef1fdadd72 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-28 04:13:38.536805 | orchestrator | 60b94d818595 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-28 04:13:38.946283 | orchestrator |
2026-03-28 04:13:38.946457 | orchestrator | ## Images @ testbed-node-2
2026-03-28 04:13:38.946476 | orchestrator |
2026-03-28 04:13:38.946488 | orchestrator | + echo
2026-03-28 04:13:38.946500 | orchestrator | + echo '## Images @ testbed-node-2'
2026-03-28 04:13:38.946513 | orchestrator | + echo
2026-03-28 04:13:38.946524 | orchestrator | + osism container testbed-node-2 images
2026-03-28 04:13:41.509397 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-28 04:13:41.509494 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-28 04:13:41.509504 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-28 04:13:41.509512 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-28 04:13:41.509519 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-28 04:13:41.509525 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-28 04:13:41.509531 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-28 04:13:41.509537 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-28 04:13:41.509565 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-28 04:13:41.509572 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-28 04:13:41.509578 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-28 04:13:41.509587 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-28 04:13:41.509593 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-28 04:13:41.509600 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-28 04:13:41.509606 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-28 04:13:41.509625 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-28 04:13:41.509631 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-28 04:13:41.509637 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-28 04:13:41.509643 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-28 04:13:41.509649 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-28 04:13:41.509655 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-28 04:13:41.509662 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-28 04:13:41.509668 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-28 04:13:41.509674 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-28 04:13:41.509680 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-28 04:13:41.509686 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-28 04:13:41.509692 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-28 04:13:41.509822 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-28 04:13:41.509832 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-28 04:13:41.509838 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-28 04:13:41.509844 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-28 04:13:41.509850 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-28 04:13:41.509856 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-28 04:13:41.509864 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-28 04:13:41.509888 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-28 04:13:41.509899 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-28 04:13:41.509910 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-28 04:13:41.509921 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-28 04:13:41.509930 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-28 04:13:41.509936 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-28 04:13:41.509942 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-28 04:13:41.509948 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-28 04:13:41.509954 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-28 04:13:41.509960 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-28 04:13:41.509966 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-28 04:13:41.509972 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-28 04:13:41.509978 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-28 04:13:41.509984 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-28 04:13:41.509990 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-28 04:13:41.509996 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-28 04:13:41.510002 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-28 04:13:41.510008 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-28 04:13:41.510014 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-28 04:13:41.510063 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-28 04:13:41.510069 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-28 04:13:41.510075 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-28 04:13:41.510081 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-28 04:13:41.510087 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-28 04:13:41.510093 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-28 04:13:41.510108 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-28 04:13:41.510120 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-28 04:13:41.510126 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-28 04:13:41.510132 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-28 04:13:41.510138 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-28 04:13:41.510144 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-28 04:13:41.510151 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-28 04:13:41.510157 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-28 04:13:41.510163 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-28 04:13:41.510169 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-28 04:13:41.510175 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-28 04:13:41.870810 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-03-28 04:13:41.882650 | orchestrator | + set -e
2026-03-28 04:13:41.883670 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 04:13:41.883711 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 04:13:41.883723 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 04:13:41.883734 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 04:13:41.883745 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 04:13:41.883760 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 04:13:41.883780 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 04:13:41.883798 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-28 04:13:41.883817 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-28 04:13:41.883837 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-28 04:13:41.883854 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-28 04:13:41.883866 | orchestrator | ++ export ARA=false
2026-03-28 04:13:41.883877 | orchestrator | ++ ARA=false
2026-03-28 04:13:41.883888 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 04:13:41.883899 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 04:13:41.883910 | orchestrator | ++ export TEMPEST=false
2026-03-28 04:13:41.883920 | orchestrator | ++ TEMPEST=false
2026-03-28 04:13:41.883931 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 04:13:41.883941 | orchestrator | ++ IS_ZUUL=true
2026-03-28 04:13:41.883952 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 04:13:41.883963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2026-03-28 04:13:41.883973 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 04:13:41.883984 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 04:13:41.883994 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 04:13:41.884005 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 04:13:41.884017 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 04:13:41.884027 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 04:13:41.884038 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 04:13:41.884049 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 04:13:41.884059 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 04:13:41.884070 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-28 04:13:41.894312 | orchestrator | + set -e
2026-03-28 04:13:41.894408 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 04:13:41.894421 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 04:13:41.894433 | orchestrator | ++ INTERACTIVE=false
2026-03-28 04:13:41.894444 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 04:13:41.894455 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 04:13:41.894466 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 04:13:41.896061 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-28 04:13:41.901714 | orchestrator |
2026-03-28 04:13:41.901795 | orchestrator | # Ceph status
2026-03-28 04:13:41.901813 | orchestrator |
2026-03-28 04:13:41.901827 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-28 04:13:41.901843 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-28 04:13:41.901859 | orchestrator | + echo
2026-03-28 04:13:41.901875 | orchestrator | + echo '# Ceph status'
2026-03-28 04:13:41.901890 | orchestrator | + echo
2026-03-28 04:13:41.901905 | orchestrator | + ceph -s
2026-03-28 04:13:42.536527 | orchestrator | cluster:
2026-03-28 04:13:42.536610 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-28 04:13:42.536621 | orchestrator | health: HEALTH_OK
2026-03-28 04:13:42.536630 | orchestrator |
2026-03-28 04:13:42.536637 | orchestrator | services:
2026-03-28 04:13:42.536644 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 71m)
2026-03-28 04:13:42.536653 | orchestrator | mgr: testbed-node-2(active, since 58m), standbys: testbed-node-1, testbed-node-0
2026-03-28 04:13:42.536661 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-28 04:13:42.536668 | orchestrator | osd: 6 osds: 6 up (since 67m), 6 in (since 68m)
2026-03-28 04:13:42.536675 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-28 04:13:42.536681 | orchestrator |
2026-03-28 04:13:42.536688 | orchestrator | data:
2026-03-28 04:13:42.536695 | orchestrator | volumes: 1/1 healthy
2026-03-28 04:13:42.536702 | orchestrator | pools: 14 pools, 401 pgs
2026-03-28 04:13:42.536708 | orchestrator | objects: 555 objects, 2.2 GiB
2026-03-28 04:13:42.536715 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-03-28 04:13:42.536722 | orchestrator | pgs: 401 active+clean
2026-03-28 04:13:42.536728 | orchestrator |
2026-03-28 04:13:42.584657 | orchestrator |
2026-03-28 04:13:42.584746 | orchestrator | # Ceph versions
2026-03-28 04:13:42.584760 | orchestrator |
2026-03-28 04:13:42.584771 | orchestrator | + echo
2026-03-28 04:13:42.584783 | orchestrator | + echo '# Ceph versions'
2026-03-28 04:13:42.584795 | orchestrator | + echo
2026-03-28 04:13:42.584806 | orchestrator | + ceph versions
2026-03-28 04:13:43.246012 | orchestrator | {
2026-03-28 04:13:43.246150 | orchestrator |     "mon": {
2026-03-28 04:13:43.246159 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-28 04:13:43.246165 | orchestrator |     },
2026-03-28 04:13:43.246169 | orchestrator |     "mgr": {
2026-03-28 04:13:43.246173 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-28 04:13:43.246177 | orchestrator |     },
2026-03-28 04:13:43.246181 | orchestrator |     "osd": {
2026-03-28 04:13:43.246185 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-03-28 04:13:43.246189 | orchestrator |     },
2026-03-28 04:13:43.246192 | orchestrator |     "mds": {
2026-03-28 04:13:43.246196 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-28 04:13:43.246200 | orchestrator |     },
2026-03-28 04:13:43.246204 | orchestrator |     "rgw": {
2026-03-28 04:13:43.246207 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-28 04:13:43.246211 | orchestrator |     },
2026-03-28 04:13:43.246215 | orchestrator |     "overall": {
2026-03-28 04:13:43.246250 | orchestrator |         "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-03-28 04:13:43.246255 | orchestrator |     }
2026-03-28 04:13:43.246258 | orchestrator | }
2026-03-28 04:13:43.286974 | orchestrator |
2026-03-28 04:13:43.287048 | orchestrator | # Ceph OSD tree
2026-03-28 04:13:43.287056 | orchestrator |
2026-03-28 04:13:43.287062 | orchestrator | + echo
2026-03-28 04:13:43.287068 | orchestrator | + echo '# Ceph OSD tree'
2026-03-28 04:13:43.287075 | orchestrator | + echo
2026-03-28 04:13:43.287080 | orchestrator | + ceph osd df tree
2026-03-28 04:13:43.827806 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-28 04:13:43.827924 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 369 MiB 113 GiB 5.87 1.00 - root default
2026-03-28 04:13:43.827942 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-03-28 04:13:43.827954 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 984 MiB 923 MiB 1 KiB 62 MiB 19 GiB 4.81 0.82 174 up osd.0
2026-03-28 04:13:43.827967 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.93 1.18 218 up osd.3
2026-03-28 04:13:43.828010 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-03-28 04:13:43.828040 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.84 1.17 204 up osd.1
2026-03-28 04:13:43.828053 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1000 MiB 939 MiB 1 KiB 62 MiB 19 GiB 4.89 0.83 186 up osd.4
2026-03-28 04:13:43.828067 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5
2026-03-28 04:13:43.828081 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.39 1.26 191 up osd.2
2026-03-28 04:13:43.828096 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 889 MiB 827 MiB 1 KiB 62 MiB 19 GiB 4.34 0.74 197 up osd.5
2026-03-28 04:13:43.828109 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 369 MiB 113 GiB 5.87
2026-03-28 04:13:43.828123 | orchestrator | MIN/MAX VAR: 0.74/1.26 STDDEV: 1.21
2026-03-28 04:13:43.874241 | orchestrator |
2026-03-28 04:13:43.874418 | orchestrator | # Ceph monitor status
2026-03-28 04:13:43.874436 | orchestrator |
2026-03-28 04:13:43.874448 | orchestrator | + echo
2026-03-28 04:13:43.874459 | orchestrator | + echo '# Ceph monitor status'
2026-03-28 04:13:43.874471 | orchestrator | + echo
2026-03-28 04:13:43.874482 | orchestrator | + ceph mon stat
2026-03-28 04:13:44.522076 | orchestrator | e1: 3 mons
at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-28 04:13:44.571370 | orchestrator | 2026-03-28 04:13:44.571445 | orchestrator | # Ceph quorum status 2026-03-28 04:13:44.571453 | orchestrator | 2026-03-28 04:13:44.571459 | orchestrator | + echo 2026-03-28 04:13:44.571464 | orchestrator | + echo '# Ceph quorum status' 2026-03-28 04:13:44.571470 | orchestrator | + echo 2026-03-28 04:13:44.572464 | orchestrator | + ceph quorum_status 2026-03-28 04:13:44.572481 | orchestrator | + jq 2026-03-28 04:13:45.229431 | orchestrator | { 2026-03-28 04:13:45.229520 | orchestrator | "election_epoch": 4, 2026-03-28 04:13:45.229532 | orchestrator | "quorum": [ 2026-03-28 04:13:45.229541 | orchestrator | 0, 2026-03-28 04:13:45.229550 | orchestrator | 1, 2026-03-28 04:13:45.229559 | orchestrator | 2 2026-03-28 04:13:45.229567 | orchestrator | ], 2026-03-28 04:13:45.229575 | orchestrator | "quorum_names": [ 2026-03-28 04:13:45.229586 | orchestrator | "testbed-node-0", 2026-03-28 04:13:45.229595 | orchestrator | "testbed-node-1", 2026-03-28 04:13:45.229604 | orchestrator | "testbed-node-2" 2026-03-28 04:13:45.229613 | orchestrator | ], 2026-03-28 04:13:45.229622 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-28 04:13:45.229632 | orchestrator | "quorum_age": 4295, 2026-03-28 04:13:45.229640 | orchestrator | "features": { 2026-03-28 04:13:45.229649 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-28 04:13:45.229658 | orchestrator | "quorum_mon": [ 2026-03-28 04:13:45.229667 | orchestrator | "kraken", 2026-03-28 04:13:45.229675 | orchestrator | "luminous", 2026-03-28 04:13:45.229684 | orchestrator | "mimic", 2026-03-28 04:13:45.229693 | orchestrator | 
"osdmap-prune", 2026-03-28 04:13:45.229702 | orchestrator | "nautilus", 2026-03-28 04:13:45.229711 | orchestrator | "octopus", 2026-03-28 04:13:45.229719 | orchestrator | "pacific", 2026-03-28 04:13:45.229727 | orchestrator | "elector-pinging", 2026-03-28 04:13:45.229736 | orchestrator | "quincy", 2026-03-28 04:13:45.229744 | orchestrator | "reef" 2026-03-28 04:13:45.229754 | orchestrator | ] 2026-03-28 04:13:45.229763 | orchestrator | }, 2026-03-28 04:13:45.229771 | orchestrator | "monmap": { 2026-03-28 04:13:45.229779 | orchestrator | "epoch": 1, 2026-03-28 04:13:45.229787 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-28 04:13:45.229797 | orchestrator | "modified": "2026-03-28T03:01:57.131380Z", 2026-03-28 04:13:45.229806 | orchestrator | "created": "2026-03-28T03:01:57.131380Z", 2026-03-28 04:13:45.229815 | orchestrator | "min_mon_release": 18, 2026-03-28 04:13:45.229823 | orchestrator | "min_mon_release_name": "reef", 2026-03-28 04:13:45.229833 | orchestrator | "election_strategy": 1, 2026-03-28 04:13:45.229841 | orchestrator | "disallowed_leaders: ": "", 2026-03-28 04:13:45.229875 | orchestrator | "stretch_mode": false, 2026-03-28 04:13:45.229884 | orchestrator | "tiebreaker_mon": "", 2026-03-28 04:13:45.229893 | orchestrator | "removed_ranks: ": "", 2026-03-28 04:13:45.229901 | orchestrator | "features": { 2026-03-28 04:13:45.229910 | orchestrator | "persistent": [ 2026-03-28 04:13:45.229918 | orchestrator | "kraken", 2026-03-28 04:13:45.229926 | orchestrator | "luminous", 2026-03-28 04:13:45.229936 | orchestrator | "mimic", 2026-03-28 04:13:45.229944 | orchestrator | "osdmap-prune", 2026-03-28 04:13:45.229953 | orchestrator | "nautilus", 2026-03-28 04:13:45.229961 | orchestrator | "octopus", 2026-03-28 04:13:45.229971 | orchestrator | "pacific", 2026-03-28 04:13:45.229979 | orchestrator | "elector-pinging", 2026-03-28 04:13:45.230009 | orchestrator | "quincy", 2026-03-28 04:13:45.230071 | orchestrator | "reef" 2026-03-28 
04:13:45.230081 | orchestrator | ], 2026-03-28 04:13:45.230090 | orchestrator | "optional": [] 2026-03-28 04:13:45.230099 | orchestrator | }, 2026-03-28 04:13:45.230109 | orchestrator | "mons": [ 2026-03-28 04:13:45.230118 | orchestrator | { 2026-03-28 04:13:45.230127 | orchestrator | "rank": 0, 2026-03-28 04:13:45.230136 | orchestrator | "name": "testbed-node-0", 2026-03-28 04:13:45.230145 | orchestrator | "public_addrs": { 2026-03-28 04:13:45.230153 | orchestrator | "addrvec": [ 2026-03-28 04:13:45.230162 | orchestrator | { 2026-03-28 04:13:45.230171 | orchestrator | "type": "v2", 2026-03-28 04:13:45.230181 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-28 04:13:45.230190 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230198 | orchestrator | }, 2026-03-28 04:13:45.230207 | orchestrator | { 2026-03-28 04:13:45.230216 | orchestrator | "type": "v1", 2026-03-28 04:13:45.230225 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-28 04:13:45.230234 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230243 | orchestrator | } 2026-03-28 04:13:45.230252 | orchestrator | ] 2026-03-28 04:13:45.230260 | orchestrator | }, 2026-03-28 04:13:45.230270 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-28 04:13:45.230279 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-28 04:13:45.230289 | orchestrator | "priority": 0, 2026-03-28 04:13:45.230297 | orchestrator | "weight": 0, 2026-03-28 04:13:45.230307 | orchestrator | "crush_location": "{}" 2026-03-28 04:13:45.230338 | orchestrator | }, 2026-03-28 04:13:45.230347 | orchestrator | { 2026-03-28 04:13:45.230356 | orchestrator | "rank": 1, 2026-03-28 04:13:45.230364 | orchestrator | "name": "testbed-node-1", 2026-03-28 04:13:45.230372 | orchestrator | "public_addrs": { 2026-03-28 04:13:45.230379 | orchestrator | "addrvec": [ 2026-03-28 04:13:45.230387 | orchestrator | { 2026-03-28 04:13:45.230394 | orchestrator | "type": "v2", 2026-03-28 04:13:45.230402 | orchestrator | "addr": "192.168.16.11:3300", 
2026-03-28 04:13:45.230410 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230418 | orchestrator | }, 2026-03-28 04:13:45.230426 | orchestrator | { 2026-03-28 04:13:45.230434 | orchestrator | "type": "v1", 2026-03-28 04:13:45.230442 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-28 04:13:45.230449 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230457 | orchestrator | } 2026-03-28 04:13:45.230465 | orchestrator | ] 2026-03-28 04:13:45.230473 | orchestrator | }, 2026-03-28 04:13:45.230482 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-28 04:13:45.230490 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-28 04:13:45.230498 | orchestrator | "priority": 0, 2026-03-28 04:13:45.230506 | orchestrator | "weight": 0, 2026-03-28 04:13:45.230514 | orchestrator | "crush_location": "{}" 2026-03-28 04:13:45.230522 | orchestrator | }, 2026-03-28 04:13:45.230530 | orchestrator | { 2026-03-28 04:13:45.230538 | orchestrator | "rank": 2, 2026-03-28 04:13:45.230546 | orchestrator | "name": "testbed-node-2", 2026-03-28 04:13:45.230554 | orchestrator | "public_addrs": { 2026-03-28 04:13:45.230562 | orchestrator | "addrvec": [ 2026-03-28 04:13:45.230571 | orchestrator | { 2026-03-28 04:13:45.230579 | orchestrator | "type": "v2", 2026-03-28 04:13:45.230587 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-28 04:13:45.230596 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230604 | orchestrator | }, 2026-03-28 04:13:45.230612 | orchestrator | { 2026-03-28 04:13:45.230620 | orchestrator | "type": "v1", 2026-03-28 04:13:45.230629 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-28 04:13:45.230637 | orchestrator | "nonce": 0 2026-03-28 04:13:45.230660 | orchestrator | } 2026-03-28 04:13:45.230669 | orchestrator | ] 2026-03-28 04:13:45.230678 | orchestrator | }, 2026-03-28 04:13:45.230685 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-28 04:13:45.230694 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-28 04:13:45.230702 | 
orchestrator | "priority": 0, 2026-03-28 04:13:45.230724 | orchestrator | "weight": 0, 2026-03-28 04:13:45.230732 | orchestrator | "crush_location": "{}" 2026-03-28 04:13:45.230740 | orchestrator | } 2026-03-28 04:13:45.230749 | orchestrator | ] 2026-03-28 04:13:45.230757 | orchestrator | } 2026-03-28 04:13:45.230765 | orchestrator | } 2026-03-28 04:13:45.230773 | orchestrator | 2026-03-28 04:13:45.230781 | orchestrator | # Ceph free space status 2026-03-28 04:13:45.230789 | orchestrator | 2026-03-28 04:13:45.230798 | orchestrator | + echo 2026-03-28 04:13:45.230806 | orchestrator | + echo '# Ceph free space status' 2026-03-28 04:13:45.230816 | orchestrator | + echo 2026-03-28 04:13:45.230824 | orchestrator | + ceph df 2026-03-28 04:13:45.934137 | orchestrator | --- RAW STORAGE --- 2026-03-28 04:13:45.934262 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-28 04:13:45.934303 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-03-28 04:13:45.934416 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-03-28 04:13:45.934434 | orchestrator | 2026-03-28 04:13:45.934450 | orchestrator | --- POOLS --- 2026-03-28 04:13:45.934467 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-28 04:13:45.934484 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-28 04:13:45.934500 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-28 04:13:45.934515 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-28 04:13:45.934530 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-28 04:13:45.934545 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-03-28 04:13:45.934562 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-28 04:13:45.934579 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-28 04:13:45.934594 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-28 04:13:45.934611 | orchestrator | 
.rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-03-28 04:13:45.934629 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 04:13:45.934646 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 04:13:45.934663 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2026-03-28 04:13:45.934680 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 04:13:45.934698 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-28 04:13:45.997035 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-28 04:13:46.054687 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-28 04:13:46.054779 | orchestrator | + osism apply facts 2026-03-28 04:13:48.265876 | orchestrator | 2026-03-28 04:13:48 | INFO  | Task 374b08ba-7754-4d53-a752-c36cca04d22e (facts) was prepared for execution. 2026-03-28 04:13:48.265989 | orchestrator | 2026-03-28 04:13:48 | INFO  | It takes a moment until task 374b08ba-7754-4d53-a752-c36cca04d22e (facts) has been started and output is visible here. 2026-03-28 04:14:02.443217 | orchestrator | 2026-03-28 04:14:02.443335 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 04:14:02.443348 | orchestrator | 2026-03-28 04:14:02.443356 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 04:14:02.443363 | orchestrator | Saturday 28 March 2026 04:13:52 +0000 (0:00:00.308) 0:00:00.308 ******** 2026-03-28 04:14:02.443370 | orchestrator | ok: [testbed-manager] 2026-03-28 04:14:02.443378 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:02.443385 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:02.443392 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:02.443399 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:14:02.443422 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:14:02.443426 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:14:02.443430 | orchestrator | 2026-03-28 04:14:02.443434 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-28 04:14:02.443438 | orchestrator | Saturday 28 March 2026 04:13:54 +0000 (0:00:01.256) 0:00:01.565 ******** 2026-03-28 04:14:02.443442 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:14:02.443446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:02.443450 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:14:02.443454 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:14:02.443458 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:14:02.443462 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:14:02.443466 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:14:02.443469 | orchestrator | 2026-03-28 04:14:02.443473 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 04:14:02.443477 | orchestrator | 2026-03-28 04:14:02.443481 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-28 04:14:02.443485 | orchestrator | Saturday 28 March 2026 04:13:55 +0000 (0:00:01.404) 0:00:02.969 ******** 2026-03-28 04:14:02.443488 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:02.443492 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:02.443496 | orchestrator | ok: [testbed-manager] 2026-03-28 04:14:02.443500 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:02.443503 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:14:02.443507 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:14:02.443511 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:14:02.443514 | orchestrator | 2026-03-28 04:14:02.443518 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 04:14:02.443522 | orchestrator | 2026-03-28 04:14:02.443526 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 04:14:02.443530 | orchestrator | Saturday 28 
March 2026 04:14:01 +0000 (0:00:05.648) 0:00:08.618 ******** 2026-03-28 04:14:02.443534 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:14:02.443537 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:02.443541 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:14:02.443545 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:14:02.443549 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:14:02.443552 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:14:02.443556 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:14:02.443560 | orchestrator | 2026-03-28 04:14:02.443563 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:14:02.443568 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443573 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443577 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443581 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443585 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443588 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443592 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:14:02.443596 | orchestrator | 2026-03-28 04:14:02.443600 | orchestrator | 2026-03-28 04:14:02.443604 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:14:02.443607 | orchestrator | Saturday 28 March 2026 04:14:01 +0000 (0:00:00.619) 0:00:09.237 ******** 2026-03-28 
04:14:02.443615 | orchestrator | =============================================================================== 2026-03-28 04:14:02.443619 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2026-03-28 04:14:02.443622 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s 2026-03-28 04:14:02.443626 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s 2026-03-28 04:14:02.443630 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-03-28 04:14:02.808961 | orchestrator | + osism validate ceph-mons 2026-03-28 04:14:36.783714 | orchestrator | 2026-03-28 04:14:36.783785 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-28 04:14:36.783791 | orchestrator | 2026-03-28 04:14:36.783795 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 04:14:36.783813 | orchestrator | Saturday 28 March 2026 04:14:19 +0000 (0:00:00.455) 0:00:00.455 ******** 2026-03-28 04:14:36.783819 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 04:14:36.783823 | orchestrator | 2026-03-28 04:14:36.783827 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 04:14:36.783831 | orchestrator | Saturday 28 March 2026 04:14:20 +0000 (0:00:00.902) 0:00:01.357 ******** 2026-03-28 04:14:36.783836 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 04:14:36.783840 | orchestrator | 2026-03-28 04:14:36.783844 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 04:14:36.783848 | orchestrator | Saturday 28 March 2026 04:14:21 +0000 (0:00:01.032) 0:00:02.390 ******** 2026-03-28 04:14:36.783852 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.783857 
| orchestrator | 2026-03-28 04:14:36.783861 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 04:14:36.783865 | orchestrator | Saturday 28 March 2026 04:14:22 +0000 (0:00:00.147) 0:00:02.537 ******** 2026-03-28 04:14:36.783868 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.783872 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:36.783876 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:36.783880 | orchestrator | 2026-03-28 04:14:36.783884 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-28 04:14:36.783888 | orchestrator | Saturday 28 March 2026 04:14:22 +0000 (0:00:00.313) 0:00:02.851 ******** 2026-03-28 04:14:36.783892 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.783895 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:36.783899 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:36.783903 | orchestrator | 2026-03-28 04:14:36.783907 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 04:14:36.783911 | orchestrator | Saturday 28 March 2026 04:14:23 +0000 (0:00:01.131) 0:00:03.982 ******** 2026-03-28 04:14:36.783915 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.783919 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:14:36.783923 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:14:36.783926 | orchestrator | 2026-03-28 04:14:36.783930 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 04:14:36.783934 | orchestrator | Saturday 28 March 2026 04:14:23 +0000 (0:00:00.313) 0:00:04.296 ******** 2026-03-28 04:14:36.783938 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.783942 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:36.783946 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:36.783949 | orchestrator | 2026-03-28 04:14:36.783953 | 
orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 04:14:36.783957 | orchestrator | Saturday 28 March 2026 04:14:24 +0000 (0:00:00.532) 0:00:04.829 ******** 2026-03-28 04:14:36.783961 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.783964 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:36.783968 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:36.783972 | orchestrator | 2026-03-28 04:14:36.783976 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-28 04:14:36.783992 | orchestrator | Saturday 28 March 2026 04:14:24 +0000 (0:00:00.329) 0:00:05.158 ******** 2026-03-28 04:14:36.783996 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784000 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:14:36.784004 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:14:36.784008 | orchestrator | 2026-03-28 04:14:36.784012 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-28 04:14:36.784015 | orchestrator | Saturday 28 March 2026 04:14:24 +0000 (0:00:00.302) 0:00:05.461 ******** 2026-03-28 04:14:36.784019 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784023 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:14:36.784027 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:14:36.784031 | orchestrator | 2026-03-28 04:14:36.784037 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 04:14:36.784041 | orchestrator | Saturday 28 March 2026 04:14:25 +0000 (0:00:00.537) 0:00:05.999 ******** 2026-03-28 04:14:36.784045 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784049 | orchestrator | 2026-03-28 04:14:36.784053 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 04:14:36.784056 | orchestrator | Saturday 28 March 2026 04:14:25 +0000 
(0:00:00.258) 0:00:06.258 ******** 2026-03-28 04:14:36.784060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784064 | orchestrator | 2026-03-28 04:14:36.784068 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 04:14:36.784072 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.277) 0:00:06.536 ******** 2026-03-28 04:14:36.784075 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784079 | orchestrator | 2026-03-28 04:14:36.784083 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 04:14:36.784087 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.299) 0:00:06.835 ******** 2026-03-28 04:14:36.784090 | orchestrator | 2026-03-28 04:14:36.784094 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 04:14:36.784098 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.083) 0:00:06.919 ******** 2026-03-28 04:14:36.784102 | orchestrator | 2026-03-28 04:14:36.784105 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 04:14:36.784109 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.077) 0:00:06.997 ******** 2026-03-28 04:14:36.784113 | orchestrator | 2026-03-28 04:14:36.784117 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 04:14:36.784121 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.084) 0:00:07.082 ******** 2026-03-28 04:14:36.784124 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784128 | orchestrator | 2026-03-28 04:14:36.784132 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 04:14:36.784136 | orchestrator | Saturday 28 March 2026 04:14:26 +0000 (0:00:00.266) 0:00:07.348 ******** 2026-03-28 04:14:36.784140 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784143 | orchestrator | 2026-03-28 04:14:36.784206 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-28 04:14:36.784212 | orchestrator | Saturday 28 March 2026 04:14:27 +0000 (0:00:00.266) 0:00:07.615 ******** 2026-03-28 04:14:36.784216 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784220 | orchestrator | 2026-03-28 04:14:36.784224 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-28 04:14:36.784228 | orchestrator | Saturday 28 March 2026 04:14:27 +0000 (0:00:00.113) 0:00:07.728 ******** 2026-03-28 04:14:36.784232 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:14:36.784236 | orchestrator | 2026-03-28 04:14:36.784242 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-28 04:14:36.784246 | orchestrator | Saturday 28 March 2026 04:14:28 +0000 (0:00:01.631) 0:00:09.360 ******** 2026-03-28 04:14:36.784250 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784253 | orchestrator | 2026-03-28 04:14:36.784262 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-28 04:14:36.784266 | orchestrator | Saturday 28 March 2026 04:14:29 +0000 (0:00:00.588) 0:00:09.949 ******** 2026-03-28 04:14:36.784269 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784273 | orchestrator | 2026-03-28 04:14:36.784277 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-28 04:14:36.784281 | orchestrator | Saturday 28 March 2026 04:14:29 +0000 (0:00:00.138) 0:00:10.087 ******** 2026-03-28 04:14:36.784286 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784290 | orchestrator | 2026-03-28 04:14:36.784294 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-28 
04:14:36.784299 | orchestrator | Saturday 28 March 2026 04:14:30 +0000 (0:00:00.430) 0:00:10.518 ******** 2026-03-28 04:14:36.784303 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784308 | orchestrator | 2026-03-28 04:14:36.784313 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-28 04:14:36.784320 | orchestrator | Saturday 28 March 2026 04:14:30 +0000 (0:00:00.384) 0:00:10.903 ******** 2026-03-28 04:14:36.784325 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784331 | orchestrator | 2026-03-28 04:14:36.784340 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-28 04:14:36.784349 | orchestrator | Saturday 28 March 2026 04:14:30 +0000 (0:00:00.141) 0:00:11.044 ******** 2026-03-28 04:14:36.784354 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784360 | orchestrator | 2026-03-28 04:14:36.784375 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-03-28 04:14:36.784381 | orchestrator | Saturday 28 March 2026 04:14:30 +0000 (0:00:00.224) 0:00:11.269 ******** 2026-03-28 04:14:36.784387 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784393 | orchestrator | 2026-03-28 04:14:36.784399 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-28 04:14:36.784406 | orchestrator | Saturday 28 March 2026 04:14:30 +0000 (0:00:00.134) 0:00:11.403 ******** 2026-03-28 04:14:36.784412 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:14:36.784418 | orchestrator | 2026-03-28 04:14:36.784424 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-28 04:14:36.784430 | orchestrator | Saturday 28 March 2026 04:14:32 +0000 (0:00:01.333) 0:00:12.736 ******** 2026-03-28 04:14:36.784437 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784443 | orchestrator | 2026-03-28 
04:14:36.784450 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-28 04:14:36.784456 | orchestrator | Saturday 28 March 2026 04:14:32 +0000 (0:00:00.318) 0:00:13.054 ******** 2026-03-28 04:14:36.784463 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784469 | orchestrator | 2026-03-28 04:14:36.784474 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-28 04:14:36.784478 | orchestrator | Saturday 28 March 2026 04:14:32 +0000 (0:00:00.146) 0:00:13.201 ******** 2026-03-28 04:14:36.784483 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:14:36.784487 | orchestrator | 2026-03-28 04:14:36.784491 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-28 04:14:36.784499 | orchestrator | Saturday 28 March 2026 04:14:32 +0000 (0:00:00.192) 0:00:13.393 ******** 2026-03-28 04:14:36.784504 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784508 | orchestrator | 2026-03-28 04:14:36.784512 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-28 04:14:36.784516 | orchestrator | Saturday 28 March 2026 04:14:33 +0000 (0:00:00.154) 0:00:13.548 ******** 2026-03-28 04:14:36.784521 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:14:36.784525 | orchestrator | 2026-03-28 04:14:36.784529 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 04:14:36.784534 | orchestrator | Saturday 28 March 2026 04:14:33 +0000 (0:00:00.360) 0:00:13.908 ******** 2026-03-28 04:14:36.784538 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 04:14:36.784548 | orchestrator | 2026-03-28 04:14:36.784552 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 04:14:36.784556 | orchestrator | Saturday 28 March 2026 04:14:33 
+0000 (0:00:00.285) 0:00:14.194 ********
2026-03-28 04:14:36.784561 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:14:36.784565 | orchestrator |
2026-03-28 04:14:36.784569 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 04:14:36.784574 | orchestrator | Saturday 28 March 2026 04:14:33 +0000 (0:00:00.274) 0:00:14.468 ********
2026-03-28 04:14:36.784578 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:14:36.784582 | orchestrator |
2026-03-28 04:14:36.784587 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 04:14:36.784591 | orchestrator | Saturday 28 March 2026 04:14:35 +0000 (0:00:01.946) 0:00:16.414 ********
2026-03-28 04:14:36.784595 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:14:36.784599 | orchestrator |
2026-03-28 04:14:36.784604 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 04:14:36.784608 | orchestrator | Saturday 28 March 2026 04:14:36 +0000 (0:00:00.307) 0:00:16.722 ********
2026-03-28 04:14:36.784612 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:14:36.784616 | orchestrator |
2026-03-28 04:14:36.784625 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:14:39.705537 | orchestrator | Saturday 28 March 2026 04:14:36 +0000 (0:00:00.284) 0:00:17.006 ********
2026-03-28 04:14:39.705625 | orchestrator |
2026-03-28 04:14:39.705638 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:14:39.705648 | orchestrator | Saturday 28 March 2026 04:14:36 +0000 (0:00:00.080) 0:00:17.087 ********
2026-03-28 04:14:39.705657 | orchestrator |
2026-03-28 04:14:39.705665 | orchestrator | TASK [Flush handlers]
**********************************************************
2026-03-28 04:14:39.705675 | orchestrator | Saturday 28 March 2026 04:14:36 +0000 (0:00:00.079) 0:00:17.166 ********
2026-03-28 04:14:39.705684 | orchestrator |
2026-03-28 04:14:39.705692 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-28 04:14:39.705701 | orchestrator | Saturday 28 March 2026 04:14:36 +0000 (0:00:00.076) 0:00:17.242 ********
2026-03-28 04:14:39.705710 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:14:39.705719 | orchestrator |
2026-03-28 04:14:39.705728 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 04:14:39.705737 | orchestrator | Saturday 28 March 2026 04:14:38 +0000 (0:00:01.648) 0:00:18.891 ********
2026-03-28 04:14:39.705745 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-28 04:14:39.705754 | orchestrator |  "msg": [
2026-03-28 04:14:39.705764 | orchestrator |  "Validator run completed.",
2026-03-28 04:14:39.705774 | orchestrator |  "You can find the report file here:",
2026-03-28 04:14:39.705783 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-28T04:14:20+00:00-report.json",
2026-03-28 04:14:39.705793 | orchestrator |  "on the following host:",
2026-03-28 04:14:39.705809 | orchestrator |  "testbed-manager"
2026-03-28 04:14:39.705824 | orchestrator |  ]
2026-03-28 04:14:39.705840 | orchestrator | }
2026-03-28 04:14:39.705855 | orchestrator |
2026-03-28 04:14:39.705871 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:14:39.705888 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 04:14:39.705904 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 04:14:39.705921 | orchestrator |
testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 04:14:39.705935 | orchestrator |
2026-03-28 04:14:39.705985 | orchestrator |
2026-03-28 04:14:39.706004 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:14:39.706083 | orchestrator | Saturday 28 March 2026 04:14:39 +0000 (0:00:00.917) 0:00:19.808 ********
2026-03-28 04:14:39.706093 | orchestrator | ===============================================================================
2026-03-28 04:14:39.706102 | orchestrator | Aggregate test results step one ----------------------------------------- 1.95s
2026-03-28 04:14:39.706111 | orchestrator | Write report file ------------------------------------------------------- 1.65s
2026-03-28 04:14:39.706119 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.63s
2026-03-28 04:14:39.706128 | orchestrator | Gather status data ------------------------------------------------------ 1.33s
2026-03-28 04:14:39.706137 | orchestrator | Get container info ------------------------------------------------------ 1.13s
2026-03-28 04:14:39.706145 | orchestrator | Create report output directory ------------------------------------------ 1.03s
2026-03-28 04:14:39.706179 | orchestrator | Print report file information ------------------------------------------- 0.92s
2026-03-28 04:14:39.706188 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s
2026-03-28 04:14:39.706197 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s
2026-03-28 04:14:39.706205 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s
2026-03-28 04:14:39.706214 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2026-03-28 04:14:39.706223 | orchestrator | Pass quorum test if all monitors are in quorum
-------------------------- 0.43s
2026-03-28 04:14:39.706231 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.38s
2026-03-28 04:14:39.706241 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s
2026-03-28 04:14:39.706254 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-03-28 04:14:39.706268 | orchestrator | Set health test data ---------------------------------------------------- 0.32s
2026-03-28 04:14:39.706282 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-03-28 04:14:39.706300 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-03-28 04:14:39.706320 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s
2026-03-28 04:14:39.706333 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2026-03-28 04:14:40.059371 | orchestrator | + osism validate ceph-mgrs
2026-03-28 04:15:12.457091 | orchestrator |
2026-03-28 04:15:12.457224 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-28 04:15:12.457233 | orchestrator |
2026-03-28 04:15:12.457237 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-28 04:15:12.457243 | orchestrator | Saturday 28 March 2026 04:14:57 +0000 (0:00:00.453) 0:00:00.453 ********
2026-03-28 04:15:12.457248 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457253 | orchestrator |
2026-03-28 04:15:12.457257 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-28 04:15:12.457261 | orchestrator | Saturday 28 March 2026 04:14:58 +0000 (0:00:00.864) 0:00:01.318 ********
2026-03-28 04:15:12.457284 | orchestrator | ok: [testbed-node-0
-> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457288 | orchestrator |
2026-03-28 04:15:12.457292 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-28 04:15:12.457297 | orchestrator | Saturday 28 March 2026 04:14:59 +0000 (0:00:01.021) 0:00:02.340 ********
2026-03-28 04:15:12.457301 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457306 | orchestrator |
2026-03-28 04:15:12.457310 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-28 04:15:12.457314 | orchestrator | Saturday 28 March 2026 04:14:59 +0000 (0:00:00.125) 0:00:02.465 ********
2026-03-28 04:15:12.457318 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457322 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:15:12.457344 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:15:12.457348 | orchestrator |
2026-03-28 04:15:12.457352 | orchestrator | TASK [Get container info] ******************************************************
2026-03-28 04:15:12.457355 | orchestrator | Saturday 28 March 2026 04:14:59 +0000 (0:00:00.355) 0:00:02.820 ********
2026-03-28 04:15:12.457359 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457363 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:15:12.457366 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:15:12.457370 | orchestrator |
2026-03-28 04:15:12.457374 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-28 04:15:12.457378 | orchestrator | Saturday 28 March 2026 04:15:00 +0000 (0:00:01.080) 0:00:03.901 ********
2026-03-28 04:15:12.457382 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457386 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:15:12.457389 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:15:12.457393 | orchestrator |
2026-03-28 04:15:12.457397 | orchestrator | TASK [Set test result to passed if container is existing]
**********************
2026-03-28 04:15:12.457401 | orchestrator | Saturday 28 March 2026 04:15:00 +0000 (0:00:00.330) 0:00:04.231 ********
2026-03-28 04:15:12.457404 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457409 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:15:12.457413 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:15:12.457417 | orchestrator |
2026-03-28 04:15:12.457420 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:12.457424 | orchestrator | Saturday 28 March 2026 04:15:01 +0000 (0:00:00.557) 0:00:04.789 ********
2026-03-28 04:15:12.457428 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457432 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:15:12.457435 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:15:12.457439 | orchestrator |
2026-03-28 04:15:12.457443 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-28 04:15:12.457447 | orchestrator | Saturday 28 March 2026 04:15:01 +0000 (0:00:00.347) 0:00:05.137 ********
2026-03-28 04:15:12.457450 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457454 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:15:12.457458 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:15:12.457462 | orchestrator |
2026-03-28 04:15:12.457465 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-28 04:15:12.457469 | orchestrator | Saturday 28 March 2026 04:15:02 +0000 (0:00:00.321) 0:00:05.459 ********
2026-03-28 04:15:12.457473 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457477 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:15:12.457480 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:15:12.457484 | orchestrator |
2026-03-28 04:15:12.457488 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 04:15:12.457492 |
orchestrator | Saturday 28 March 2026 04:15:02 +0000 (0:00:00.568) 0:00:06.028 ********
2026-03-28 04:15:12.457495 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457499 | orchestrator |
2026-03-28 04:15:12.457503 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 04:15:12.457507 | orchestrator | Saturday 28 March 2026 04:15:02 +0000 (0:00:00.258) 0:00:06.286 ********
2026-03-28 04:15:12.457511 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457514 | orchestrator |
2026-03-28 04:15:12.457518 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 04:15:12.457525 | orchestrator | Saturday 28 March 2026 04:15:03 +0000 (0:00:00.298) 0:00:06.585 ********
2026-03-28 04:15:12.457529 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457533 | orchestrator |
2026-03-28 04:15:12.457536 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457540 | orchestrator | Saturday 28 March 2026 04:15:03 +0000 (0:00:00.071) 0:00:06.862 ********
2026-03-28 04:15:12.457544 | orchestrator |
2026-03-28 04:15:12.457548 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457551 | orchestrator | Saturday 28 March 2026 04:15:03 +0000 (0:00:00.075) 0:00:06.933 ********
2026-03-28 04:15:12.457560 | orchestrator |
2026-03-28 04:15:12.457564 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457567 | orchestrator | Saturday 28 March 2026 04:15:03 +0000 (0:00:00.075) 0:00:07.009 ********
2026-03-28 04:15:12.457571 | orchestrator |
2026-03-28 04:15:12.457575 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 04:15:12.457579 | orchestrator | Saturday 28 March 2026 04:15:03 +0000 (0:00:00.080)
0:00:07.090 ********
2026-03-28 04:15:12.457583 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457586 | orchestrator |
2026-03-28 04:15:12.457590 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-28 04:15:12.457594 | orchestrator | Saturday 28 March 2026 04:15:04 +0000 (0:00:00.290) 0:00:07.380 ********
2026-03-28 04:15:12.457598 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457601 | orchestrator |
2026-03-28 04:15:12.457618 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-28 04:15:12.457622 | orchestrator | Saturday 28 March 2026 04:15:04 +0000 (0:00:00.284) 0:00:07.665 ********
2026-03-28 04:15:12.457627 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457631 | orchestrator |
2026-03-28 04:15:12.457635 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-28 04:15:12.457640 | orchestrator | Saturday 28 March 2026 04:15:04 +0000 (0:00:00.112) 0:00:07.778 ********
2026-03-28 04:15:12.457644 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:15:12.457648 | orchestrator |
2026-03-28 04:15:12.457652 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-28 04:15:12.457657 | orchestrator | Saturday 28 March 2026 04:15:06 +0000 (0:00:02.103) 0:00:09.882 ********
2026-03-28 04:15:12.457661 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457665 | orchestrator |
2026-03-28 04:15:12.457669 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-28 04:15:12.457675 | orchestrator | Saturday 28 March 2026 04:15:07 +0000 (0:00:00.452) 0:00:10.335 ********
2026-03-28 04:15:12.457681 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457691 | orchestrator |
2026-03-28 04:15:12.457698 | orchestrator | TASK [Fail test if mgr modules are disabled that
should be enabled] ************
2026-03-28 04:15:12.457704 | orchestrator | Saturday 28 March 2026 04:15:07 +0000 (0:00:00.151) 0:00:10.678 ********
2026-03-28 04:15:12.457711 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457717 | orchestrator |
2026-03-28 04:15:12.457723 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-28 04:15:12.457729 | orchestrator | Saturday 28 March 2026 04:15:07 +0000 (0:00:00.151) 0:00:10.830 ********
2026-03-28 04:15:12.457734 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:15:12.457740 | orchestrator |
2026-03-28 04:15:12.457746 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-28 04:15:12.457753 | orchestrator | Saturday 28 March 2026 04:15:07 +0000 (0:00:00.159) 0:00:10.990 ********
2026-03-28 04:15:12.457759 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457765 | orchestrator |
2026-03-28 04:15:12.457771 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-28 04:15:12.457778 | orchestrator | Saturday 28 March 2026 04:15:07 +0000 (0:00:00.270) 0:00:11.261 ********
2026-03-28 04:15:12.457784 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:15:12.457791 | orchestrator |
2026-03-28 04:15:12.457798 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 04:15:12.457804 | orchestrator | Saturday 28 March 2026 04:15:08 +0000 (0:00:00.272) 0:00:11.533 ********
2026-03-28 04:15:12.457811 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457818 | orchestrator |
2026-03-28 04:15:12.457825 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 04:15:12.457832 | orchestrator | Saturday 28 March 2026 04:15:09 +0000 (0:00:01.342) 0:00:12.876 ********
2026-03-28
04:15:12.457839 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457853 | orchestrator |
2026-03-28 04:15:12.457860 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 04:15:12.457867 | orchestrator | Saturday 28 March 2026 04:15:09 +0000 (0:00:00.268) 0:00:13.145 ********
2026-03-28 04:15:12.457874 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457882 | orchestrator |
2026-03-28 04:15:12.457889 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457896 | orchestrator | Saturday 28 March 2026 04:15:10 +0000 (0:00:00.264) 0:00:13.409 ********
2026-03-28 04:15:12.457903 | orchestrator |
2026-03-28 04:15:12.457910 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457917 | orchestrator | Saturday 28 March 2026 04:15:10 +0000 (0:00:00.071) 0:00:13.481 ********
2026-03-28 04:15:12.457927 | orchestrator |
2026-03-28 04:15:12.457935 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:12.457942 | orchestrator | Saturday 28 March 2026 04:15:10 +0000 (0:00:00.090) 0:00:13.571 ********
2026-03-28 04:15:12.457948 | orchestrator |
2026-03-28 04:15:12.457954 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-28 04:15:12.457961 | orchestrator | Saturday 28 March 2026 04:15:10 +0000 (0:00:00.299) 0:00:13.870 ********
2026-03-28 04:15:12.457967 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:12.457973 | orchestrator |
2026-03-28 04:15:12.457984 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 04:15:12.457991 | orchestrator | Saturday 28 March 2026 04:15:12 +0000 (0:00:01.444) 0:00:15.315
********
2026-03-28 04:15:12.457998 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-28 04:15:12.458004 | orchestrator |  "msg": [
2026-03-28 04:15:12.458011 | orchestrator |  "Validator run completed.",
2026-03-28 04:15:12.458095 | orchestrator |  "You can find the report file here:",
2026-03-28 04:15:12.458100 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-28T04:14:57+00:00-report.json",
2026-03-28 04:15:12.458105 | orchestrator |  "on the following host:",
2026-03-28 04:15:12.458111 | orchestrator |  "testbed-manager"
2026-03-28 04:15:12.458117 | orchestrator |  ]
2026-03-28 04:15:12.458123 | orchestrator | }
2026-03-28 04:15:12.458130 | orchestrator |
2026-03-28 04:15:12.458136 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:15:12.458144 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-28 04:15:12.458152 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 04:15:12.458169 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 04:15:12.865135 | orchestrator |
2026-03-28 04:15:12.865216 | orchestrator |
2026-03-28 04:15:12.865222 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:15:12.865229 | orchestrator | Saturday 28 March 2026 04:15:12 +0000 (0:00:00.439) 0:00:15.754 ********
2026-03-28 04:15:12.865233 | orchestrator | ===============================================================================
2026-03-28 04:15:12.865237 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.10s
2026-03-28 04:15:12.865242 | orchestrator | Write report file ------------------------------------------------------- 1.44s
2026-03-28 04:15:12.865246 | orchestrator | Aggregate
test results step one ----------------------------------------- 1.34s
2026-03-28 04:15:12.865250 | orchestrator | Get container info ------------------------------------------------------ 1.08s
2026-03-28 04:15:12.865254 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2026-03-28 04:15:12.865257 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-03-28 04:15:12.865281 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.57s
2026-03-28 04:15:12.865285 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-03-28 04:15:12.865290 | orchestrator | Flush handlers ---------------------------------------------------------- 0.46s
2026-03-28 04:15:12.865294 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.45s
2026-03-28 04:15:12.865298 | orchestrator | Print report file information ------------------------------------------- 0.44s
2026-03-28 04:15:12.865302 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s
2026-03-28 04:15:12.865306 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s
2026-03-28 04:15:12.865309 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s
2026-03-28 04:15:12.865314 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2026-03-28 04:15:12.865317 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s
2026-03-28 04:15:12.865321 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2026-03-28 04:15:12.865325 | orchestrator | Print report file information ------------------------------------------- 0.29s
2026-03-28 04:15:12.865329 | orchestrator | Fail due to missing
containers ------------------------------------------ 0.29s
2026-03-28 04:15:12.865333 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-03-28 04:15:13.262196 | orchestrator | + osism validate ceph-osds
2026-03-28 04:15:35.376152 | orchestrator |
2026-03-28 04:15:35.376262 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-28 04:15:35.376273 | orchestrator |
2026-03-28 04:15:35.376279 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-28 04:15:35.376285 | orchestrator | Saturday 28 March 2026 04:15:30 +0000 (0:00:00.480) 0:00:00.480 ********
2026-03-28 04:15:35.376291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:35.376297 | orchestrator |
2026-03-28 04:15:35.376303 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 04:15:35.376309 | orchestrator | Saturday 28 March 2026 04:15:31 +0000 (0:00:00.888) 0:00:01.368 ********
2026-03-28 04:15:35.376315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:35.376320 | orchestrator |
2026-03-28 04:15:35.376326 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-28 04:15:35.376331 | orchestrator | Saturday 28 March 2026 04:15:31 +0000 (0:00:00.766) 0:00:02.006 ********
2026-03-28 04:15:35.376382 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:35.376392 | orchestrator |
2026-03-28 04:15:35.376403 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-28 04:15:35.376409 | orchestrator | Saturday 28 March 2026 04:15:32 +0000 (0:00:00.141) 0:00:02.773 ********
2026-03-28 04:15:35.376415 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:35.376422 | orchestrator |
2026-03-28 04:15:35.376429 |
orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-28 04:15:35.376438 | orchestrator | Saturday 28 March 2026 04:15:32 +0000 (0:00:00.141) 0:00:02.914 ********
2026-03-28 04:15:35.376447 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:35.376456 | orchestrator |
2026-03-28 04:15:35.376465 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-28 04:15:35.376474 | orchestrator | Saturday 28 March 2026 04:15:33 +0000 (0:00:00.158) 0:00:03.073 ********
2026-03-28 04:15:35.376483 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:35.376491 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:35.376500 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:35.376508 | orchestrator |
2026-03-28 04:15:35.376517 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-28 04:15:35.376525 | orchestrator | Saturday 28 March 2026 04:15:33 +0000 (0:00:00.353) 0:00:03.427 ********
2026-03-28 04:15:35.376554 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:35.376565 | orchestrator |
2026-03-28 04:15:35.376574 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-28 04:15:35.376583 | orchestrator | Saturday 28 March 2026 04:15:33 +0000 (0:00:00.154) 0:00:03.582 ********
2026-03-28 04:15:35.376592 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:35.376600 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:35.376606 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:35.376611 | orchestrator |
2026-03-28 04:15:35.376617 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-28 04:15:35.376622 | orchestrator | Saturday 28 March 2026 04:15:33 +0000 (0:00:00.340) 0:00:03.923 ********
2026-03-28 04:15:35.376628 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:35.376633 |
orchestrator |
2026-03-28 04:15:35.376639 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:35.376644 | orchestrator | Saturday 28 March 2026 04:15:34 +0000 (0:00:00.842) 0:00:04.765 ********
2026-03-28 04:15:35.376649 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:35.376655 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:35.376660 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:35.376667 | orchestrator |
2026-03-28 04:15:35.376675 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-28 04:15:35.376683 | orchestrator | Saturday 28 March 2026 04:15:35 +0000 (0:00:00.334) 0:00:05.100 ********
2026-03-28 04:15:35.376700 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77d8319de100a1d494a3af54728a5a90ad4e6d1860acf5d8359abc1a942eccac', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-28 04:15:35.376714 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7e9f92c54b1226e56c71eb70f9b7cdad1c74e3a444a3d03e19dcd3d0e7081b63', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-28 04:15:35.376724 | orchestrator | skipping: [testbed-node-3] => (item={'id': '53a7242ef8cdd448bab3a847a51d51f1b8c2e4af2d20856fc4db8a7522521837', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-03-28 04:15:35.376734 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a6e350df65e7e435544255e8e8dd2bfe1e88a4cdd4341411aec522e74cd5a81', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state':
'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-03-28 04:15:35.376742 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8a716008d0db2d26762fa262ce10cc96b6dc1b9af97c6c447a8c34f14ca86c1b', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-28 04:15:35.376818 | orchestrator | skipping: [testbed-node-3] => (item={'id': '546f3c4f3ad82449b6f0c171cc78d638d288553588a6e4e5e19b63f56bdb69cf', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-28 04:15:35.376831 | orchestrator | skipping: [testbed-node-3] => (item={'id': '417a022e7cee34e04079d420753197e6516d834f807b5dd36d1aa0a095d97cf8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-28 04:15:35.376841 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f6c343cba85c1373a280caa8a41c86413d05a7d3db37cae38a8804f62eca7b1e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})
2026-03-28 04:15:35.376858 | orchestrator | skipping: [testbed-node-3] => (item={'id': '725992ded4cd33dd853e4e309d17640db87a4c8346b2253a25240740efcae5e1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376873 | orchestrator | skipping: [testbed-node-3] => (item={'id': '58bd82db631137ac96f13fa79b5ba28e14d726749122c025b21cda636528df10', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376880 | orchestrator | skipping:
[testbed-node-3] => (item={'id': '7032277d7f43fdec28dbab6a19f952869fd863d1a722ff840af74dcec1eefb11', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376889 | orchestrator | ok: [testbed-node-3] => (item={'id': '90eb95a02bcc7396a7cdc87affb48a55249a6e02f9d2c7924d1344dacd4a8ff8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376908 | orchestrator | ok: [testbed-node-3] => (item={'id': '72388330d3ef5e5d86d3281bfbec3e9047fafff8af1828544b0da2c5913594eb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376921 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1db98901fe48b8884a081d560d4d4887eeca00fd377749cb337d12ce51b6ea06', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-28 04:15:35.376928 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd2af53b96b8a99d62e09d07ba13b611f183a770a507031367d8d91b396810d79', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-28 04:15:35.376934 | orchestrator | skipping: [testbed-node-3] => (item={'id': '92d05a728ee7d0c2cb2dbfac65d32823fe6da1ccae69a7774f08ab001f29d173', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-28 04:15:35.376941 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8fc0367b3d6879a3ea1b3d63df31959bbcb3e9190c39a7909396948531a0008', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130',
'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.376947 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8db20355559308ab0a6f50f6d9e660b9118c297dc892a025fcac5bc34691b2a2', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.376955 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c8ab1b6187c89279f6c9654df98f62fae3db1f7cda54bea375027fe2d7612051', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.376961 | orchestrator | skipping: [testbed-node-4] => (item={'id': '25b55bc3e541fcb827e48451cb53427e26da901725f64deeca291d16fb5d566c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-28 04:15:35.376975 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f18bd10432c8cc5123078eac5354a2b6264d336285ebe73c32adf0da40c2255', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-28 04:15:35.663489 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0dacdcc4ac0918a670d73062437321cb5a7aa2aeabe7ca92a331614d81e87fde', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-28 04:15:35.663613 | orchestrator | skipping: [testbed-node-4] => (item={'id': '783d40fd9f54f0a4340ee8ae7fbb62654a47f347a90ccebbce991cf6007557b1', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})  2026-03-28 04:15:35.663626 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '7d0d5cb10a7c62a5125524d06e5cde4f1fe43bfb0fdf8e6b42acb66f192ebb25', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-28 04:15:35.663654 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3392ff8db5bcdf2181b7a19226ee8c338e565787149b1a6edd1bf726881181b4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-28 04:15:35.663664 | orchestrator | skipping: [testbed-node-4] => (item={'id': '232ad978721d6ea337e78d32ba56c79431669dd05f4378e741dca0cfa063b032', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-28 04:15:35.663674 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8c7336f77b5c07e552700cd1b2d3368fd10edd75b0fb40f6aebfc5744590eed2', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-03-28 04:15:35.663684 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a05df176fdb65f842d2a0b7ec3e4dde8f9e22c1397b734b178790f28ac85607a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663695 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eceb4e0436966475952293124dd8b45b7225134f9cdcb84c73f09093737efc55', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663706 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'5556da466fffdf40a9fa695e929814525844c230342d85f27c88c847a769197f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663718 | orchestrator | ok: [testbed-node-4] => (item={'id': '4ff3971b18d2cd4eb76f15350efb3d2829d20907d34806557200cc0fe90f5a73', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-28 04:15:35.663729 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b51f43e411b8b9f8459f9fa03259311b3dcb81013f0f1a0f1025ecb23401bfb2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-28 04:15:35.663739 | orchestrator | skipping: [testbed-node-4] => (item={'id': '422f898ede67e79f55e004fb31ce9cc1f9b59a613964d09282770a159990fc13', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663749 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ba61603daa20b8e909fa456bbfc6ce39662b18ef712f6806a9948c689c77d8b8', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-28 04:15:35.663759 | orchestrator | skipping: [testbed-node-4] => (item={'id': '64b1758a0fffcf5bbca9293799faba2a4ffd09122c81a80cd4c510c5d3a0ab5c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-28 04:15:35.663790 | orchestrator | skipping: [testbed-node-4] => (item={'id': '142ef7295688768b158a9dc2e9f43fa3e184975c30c2014e0ad78ad1c6d10a67', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 
'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.663800 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a50c904b01c19e03f7c1279f08d4df4c4f7eff8815da93fd343be5d1dff84cd', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.663810 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e5b6425e806491faf4cebcc219df08b14147063ac7c9845831ac21a031a97e82', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:35.663820 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2aad5d3ce70ab7d5abfc2e82741cf50cd0654c040676bced28d7f8e3dffc25ee', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-28 04:15:35.663834 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a69722e4813c263d84f92d219d155850d19f824a47eae01ef6c89f5a0a927c37', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-28 04:15:35.663844 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a0982bbbc51591486b2ca7580a3ed27ddc1fd69d6e5a28ac4ae3f49e77f9324', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-03-28 04:15:35.663854 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8807f557b2407f518b3c647a0f40d719b20a723100a8d4041e14b122b109fff', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})  2026-03-28 04:15:35.663864 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '0bf43d1d166742e7c5d4591f7c43f6f7b0bdbe7e464b8c45bd606dbb9274ee42', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-28 04:15:35.663874 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2ea74260e4c317b27c2fd96abf3d35066fd0d4df2f55fcfccdb395f7e36cdc2e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-28 04:15:35.663884 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f685d1c06fae23d6e5e388f95d2908c875a16e3d9f5a32357609cbcdeff9aba5', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-28 04:15:35.663893 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dece338c4c2e02869c095a564531c8b0d773d4a0c841b7aabbd2d657c9271252', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-03-28 04:15:35.663903 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b53706e0933f4618500c1c06860ffbb9bcdd8b015404355cf61bdec584fc0edc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663913 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f21ab3ff8aef994d89853af37ac7fcf4c197b257e2a280099dd0c9a8eed2876', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663923 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7107b261cd24da334bbc93e8849f252962ccb56233ac8756070cb8c77be96462', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:35.663939 | orchestrator | ok: [testbed-node-5] => (item={'id': '51d5c35e723842e51daa316f55930cbbc1484978e5a23f79ab9b269dd03c3c16', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-28 04:15:35.663957 | orchestrator | ok: [testbed-node-5] => (item={'id': '72d0c16fe946f5ccddcf7adfdbf4bc469afaa794f7df42a89e8f0d9f70211f91', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-28 04:15:47.672949 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac395c44d628fbf0bd4e702a7e6b71a1058948e4f48e59ea4cc232fa0259c947', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-03-28 04:15:47.673053 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c8994d2a6cfd530b174a46dae9127609e2fcb8845b83aa2db1233bb84c067b54', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-28 04:15:47.673064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e7423d376b08966c8362d91a97b1fb191a7589cf63a8c2235250317a6ef3fa3', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-28 04:15:47.673070 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1cad50401e0c475e99170c944f0d03c3ef68a93e78ef9fc7105ec770a7e79027', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-28 04:15:47.673077 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': 'e4ac81dc47b651911903fca876106e8fa77a33a08a0b735da479bda32a915d5c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-28 04:15:47.673082 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f423acf383f7b3576893deb7c43c26dd205d56af220ec040d3a840c5e87c2fb', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-28 04:15:47.673086 | orchestrator |
2026-03-28 04:15:47.673090 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-28 04:15:47.673095 | orchestrator | Saturday 28 March 2026 04:15:35 +0000 (0:00:00.589) 0:00:05.689 ********
2026-03-28 04:15:47.673099 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673104 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673108 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673111 | orchestrator |
2026-03-28 04:15:47.673115 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-28 04:15:47.673119 | orchestrator | Saturday 28 March 2026 04:15:36 +0000 (0:00:00.559) 0:00:06.045 ********
2026-03-28 04:15:47.673123 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673128 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:47.673131 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:47.673135 | orchestrator |
2026-03-28 04:15:47.673139 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-28 04:15:47.673143 | orchestrator | Saturday 28 March 2026 04:15:36 +0000 (0:00:00.337) 0:00:06.604 ********
2026-03-28 04:15:47.673147 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673150 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673154 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673158 | orchestrator |
2026-03-28 04:15:47.673162 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:47.673180 | orchestrator | Saturday 28 March 2026 04:15:36 +0000 (0:00:00.337) 0:00:06.942 ********
2026-03-28 04:15:47.673184 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673188 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673191 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673195 | orchestrator |
2026-03-28 04:15:47.673199 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-28 04:15:47.673203 | orchestrator | Saturday 28 March 2026 04:15:37 +0000 (0:00:00.361) 0:00:07.304 ********
2026-03-28 04:15:47.673219 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-28 04:15:47.673224 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-28 04:15:47.673228 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673232 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-28 04:15:47.673236 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-28 04:15:47.673240 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:47.673243 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-28 04:15:47.673247 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-28 04:15:47.673251 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:47.673255 | orchestrator |
2026-03-28 04:15:47.673258 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-28 04:15:47.673262 | orchestrator | Saturday 28 March 2026 04:15:37 +0000 (0:00:00.325) 0:00:07.629 ********
2026-03-28 04:15:47.673266 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673270 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673274 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673277 | orchestrator |
2026-03-28 04:15:47.673281 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-28 04:15:47.673285 | orchestrator | Saturday 28 March 2026 04:15:38 +0000 (0:00:00.521) 0:00:08.151 ********
2026-03-28 04:15:47.673289 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673304 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:47.673308 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:47.673312 | orchestrator |
2026-03-28 04:15:47.673316 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-28 04:15:47.673320 | orchestrator | Saturday 28 March 2026 04:15:38 +0000 (0:00:00.321) 0:00:08.472 ********
2026-03-28 04:15:47.673323 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673327 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:47.673331 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:47.673335 | orchestrator |
2026-03-28 04:15:47.673338 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-28 04:15:47.673342 | orchestrator | Saturday 28 March 2026 04:15:38 +0000 (0:00:00.345) 0:00:08.818 ********
2026-03-28 04:15:47.673346 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673349 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673353 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673357 | orchestrator |
2026-03-28 04:15:47.673361 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 04:15:47.673364 | orchestrator | Saturday 28 March 2026 04:15:39 +0000 (0:00:00.322) 0:00:09.141 ********
2026-03-28 04:15:47.673368 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673372 | orchestrator |
2026-03-28 04:15:47.673378 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 04:15:47.673382 | orchestrator | Saturday 28 March 2026 04:15:39 +0000 (0:00:00.703) 0:00:09.844 ********
2026-03-28 04:15:47.673386 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673389 | orchestrator |
2026-03-28 04:15:47.673393 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 04:15:47.673401 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.253) 0:00:10.098 ********
2026-03-28 04:15:47.673405 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673409 | orchestrator |
2026-03-28 04:15:47.673413 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:47.673416 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.081) 0:00:10.395 ********
2026-03-28 04:15:47.673420 | orchestrator |
2026-03-28 04:15:47.673424 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:47.673428 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.070) 0:00:10.476 ********
2026-03-28 04:15:47.673431 | orchestrator |
2026-03-28 04:15:47.673435 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:47.673439 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.071) 0:00:10.547 ********
2026-03-28 04:15:47.673442 | orchestrator |
2026-03-28 04:15:47.673446 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 04:15:47.673450 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.071) 0:00:10.618 ********
2026-03-28 04:15:47.673454 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673457 | orchestrator |
2026-03-28 04:15:47.673461 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-28 04:15:47.673465 | orchestrator | Saturday 28 March 2026 04:15:40 +0000 (0:00:00.249) 0:00:10.868 ********
2026-03-28 04:15:47.673469 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673472 | orchestrator |
2026-03-28 04:15:47.673476 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:47.673480 | orchestrator | Saturday 28 March 2026 04:15:41 +0000 (0:00:00.268) 0:00:11.136 ********
2026-03-28 04:15:47.673484 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673487 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673491 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673495 | orchestrator |
2026-03-28 04:15:47.673498 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-28 04:15:47.673502 | orchestrator | Saturday 28 March 2026 04:15:41 +0000 (0:00:00.318) 0:00:11.455 ********
2026-03-28 04:15:47.673506 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673510 | orchestrator |
2026-03-28 04:15:47.673513 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-28 04:15:47.673517 | orchestrator | Saturday 28 March 2026 04:15:42 +0000 (0:00:00.706) 0:00:12.162 ********
2026-03-28 04:15:47.673521 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 04:15:47.673525 | orchestrator |
2026-03-28 04:15:47.673528 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-28 04:15:47.673532 | orchestrator | Saturday 28 March 2026 04:15:43 +0000 (0:00:01.665) 0:00:13.827 ********
2026-03-28 04:15:47.673536 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673540 | orchestrator |
2026-03-28 04:15:47.673543 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-28 04:15:47.673547 | orchestrator | Saturday 28 March 2026 04:15:43 +0000 (0:00:00.167) 0:00:13.994 ********
2026-03-28 04:15:47.673551 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673555 | orchestrator |
2026-03-28 04:15:47.673558 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-28 04:15:47.673562 | orchestrator | Saturday 28 March 2026 04:15:44 +0000 (0:00:00.361) 0:00:14.356 ********
2026-03-28 04:15:47.673566 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:47.673569 | orchestrator |
2026-03-28 04:15:47.673573 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-28 04:15:47.673577 | orchestrator | Saturday 28 March 2026 04:15:44 +0000 (0:00:00.147) 0:00:14.503 ********
2026-03-28 04:15:47.673581 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673584 | orchestrator |
2026-03-28 04:15:47.673588 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:47.673592 | orchestrator | Saturday 28 March 2026 04:15:44 +0000 (0:00:00.207) 0:00:14.711 ********
2026-03-28 04:15:47.673599 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:47.673603 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:47.673606 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:47.673610 | orchestrator |
2026-03-28 04:15:47.673614 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-28 04:15:47.673618 | orchestrator | Saturday 28 March 2026 04:15:44 +0000 (0:00:00.310) 0:00:15.021 ********
2026-03-28 04:15:47.673621 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:15:47.673625 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:15:47.673629 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:15:58.775586 | orchestrator |
2026-03-28 04:15:58.775683 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-28 04:15:58.775693 | orchestrator | Saturday 28 March 2026 04:15:47 +0000 (0:00:02.678) 0:00:17.699 ********
2026-03-28 04:15:58.775699 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.775706 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.775712 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.775718 | orchestrator |
2026-03-28 04:15:58.775724 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-28 04:15:58.775730 | orchestrator | Saturday 28 March 2026 04:15:48 +0000 (0:00:00.360) 0:00:18.060 ********
2026-03-28 04:15:58.775736 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.775742 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.775748 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.775753 | orchestrator |
2026-03-28 04:15:58.775759 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-28 04:15:58.775765 | orchestrator | Saturday 28 March 2026 04:15:48 +0000 (0:00:00.577) 0:00:18.638 ********
2026-03-28 04:15:58.775771 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:58.775778 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:58.775783 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:58.775789 | orchestrator |
2026-03-28 04:15:58.775795 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-28 04:15:58.775813 | orchestrator | Saturday 28 March 2026 04:15:48 +0000 (0:00:00.337) 0:00:18.975 ********
2026-03-28 04:15:58.775819 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.775825 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.775831 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.775836 | orchestrator |
2026-03-28 04:15:58.775842 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-28 04:15:58.775848 | orchestrator | Saturday 28 March 2026 04:15:49 +0000 (0:00:00.594) 0:00:19.570 ********
2026-03-28 04:15:58.775854 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:58.775859 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:58.775865 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:58.775871 | orchestrator |
2026-03-28 04:15:58.775877 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-28 04:15:58.775883 | orchestrator | Saturday 28 March 2026 04:15:49 +0000 (0:00:00.333) 0:00:19.903 ********
2026-03-28 04:15:58.775889 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:58.775895 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:58.775901 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:58.775907 | orchestrator |
2026-03-28 04:15:58.775912 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 04:15:58.775918 | orchestrator | Saturday 28 March 2026 04:15:50 +0000 (0:00:00.304) 0:00:20.208 ********
2026-03-28 04:15:58.775924 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.775930 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.775984 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.775990 | orchestrator |
2026-03-28 04:15:58.775996 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-28 04:15:58.776002 | orchestrator | Saturday 28 March 2026 04:15:50 +0000 (0:00:00.521) 0:00:20.730 ********
2026-03-28 04:15:58.776010 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.776045 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.776058 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.776068 | orchestrator |
2026-03-28 04:15:58.776077 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-28 04:15:58.776086 | orchestrator | Saturday 28 March 2026 04:15:51 +0000 (0:00:00.809) 0:00:21.539 ********
2026-03-28 04:15:58.776096 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.776107 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.776116 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.776126 | orchestrator |
2026-03-28 04:15:58.776137 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-28 04:15:58.776147 | orchestrator | Saturday 28 March 2026 04:15:51 +0000 (0:00:00.336) 0:00:21.875 ********
2026-03-28 04:15:58.776157 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:58.776167 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:15:58.776179 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:15:58.776189 | orchestrator |
2026-03-28 04:15:58.776199 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-28 04:15:58.776210 | orchestrator | Saturday 28 March 2026 04:15:52 +0000 (0:00:00.340) 0:00:22.216 ********
2026-03-28 04:15:58.776221 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:15:58.776228 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:15:58.776235 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:15:58.776241 | orchestrator |
2026-03-28 04:15:58.776248 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-28 04:15:58.776254 | orchestrator | Saturday 28 March 2026 04:15:52 +0000 (0:00:00.648) 0:00:22.864 ********
2026-03-28 04:15:58.776261 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:58.776268 | orchestrator |
2026-03-28 04:15:58.776275 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-28 04:15:58.776281 | orchestrator | Saturday 28 March 2026 04:15:53 +0000 (0:00:00.294) 0:00:23.158 ********
2026-03-28 04:15:58.776288 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:15:58.776294 | orchestrator |
2026-03-28 04:15:58.776301 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 04:15:58.776308 | orchestrator | Saturday 28 March 2026 04:15:53 +0000 (0:00:00.280) 0:00:23.439 ********
2026-03-28 04:15:58.776314 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:58.776320 | orchestrator |
2026-03-28 04:15:58.776327 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 04:15:58.776333 | orchestrator | Saturday 28 March 2026 04:15:55 +0000 (0:00:01.816) 0:00:25.255 ********
2026-03-28 04:15:58.776339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:58.776346 | orchestrator |
2026-03-28 04:15:58.776352 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 04:15:58.776359 | orchestrator | Saturday 28 March 2026 04:15:55 +0000 (0:00:00.314) 0:00:25.569 ********
2026-03-28 04:15:58.776365 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:58.776372 | orchestrator |
2026-03-28 04:15:58.776393 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:58.776400 | orchestrator | Saturday 28 March 2026 04:15:55 +0000 (0:00:00.278) 0:00:25.847 ********
2026-03-28 04:15:58.776407 | orchestrator |
2026-03-28 04:15:58.776413 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:58.776420 | orchestrator | Saturday 28 March 2026 04:15:55 +0000 (0:00:00.106) 0:00:25.953 ********
2026-03-28 04:15:58.776427 | orchestrator |
2026-03-28 04:15:58.776432 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 04:15:58.776438 | orchestrator | Saturday 28 March 2026 04:15:55 +0000 (0:00:00.085) 0:00:26.038 ********
2026-03-28 04:15:58.776444 | orchestrator |
2026-03-28 04:15:58.776449 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-28 04:15:58.776455 | orchestrator | Saturday 28 March 2026 04:15:56 +0000 (0:00:00.100) 0:00:26.139 ********
2026-03-28 04:15:58.776467 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 04:15:58.776473 | orchestrator |
2026-03-28 04:15:58.776479 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 04:15:58.776484 | orchestrator | Saturday 28 March 2026 04:15:57 +0000 (0:00:01.656) 0:00:27.795 ********
2026-03-28 04:15:58.776495 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-03-28 04:15:58.776501 | orchestrator |     "msg": [
2026-03-28 04:15:58.776508 | orchestrator |         "Validator run completed.",
2026-03-28 04:15:58.776514 | orchestrator |         "You can find the report file here:",
2026-03-28 04:15:58.776519 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-03-28T04:15:31+00:00-report.json",
2026-03-28 04:15:58.776526 | orchestrator |         "on the following host:",
2026-03-28 04:15:58.776532 | orchestrator |         "testbed-manager"
2026-03-28 04:15:58.776537 | orchestrator |     ]
2026-03-28 04:15:58.776543 | orchestrator | }
2026-03-28 04:15:58.776549 | orchestrator |
2026-03-28 04:15:58.776555 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:15:58.776562 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-28 04:15:58.776569 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-28 04:15:58.776574 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-28 04:15:58.776580 | orchestrator |
2026-03-28 04:15:58.776586 | orchestrator |
2026-03-28 04:15:58.776592 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:15:58.776598 | orchestrator | Saturday 28 March 2026 04:15:58 +0000 (0:00:00.622) 0:00:28.417 ********
2026-03-28 04:15:58.776603 | orchestrator | ===============================================================================
2026-03-28 04:15:58.776609 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.68s
2026-03-28 04:15:58.776615 | orchestrator | Aggregate test results step one ----------------------------------------- 1.82s
2026-03-28 04:15:58.776620 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.67s
2026-03-28 04:15:58.776626 | orchestrator | Write report file ------------------------------------------------------- 1.66s
2026-03-28 04:15:58.776632 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s
2026-03-28 04:15:58.776637 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.84s
2026-03-28 04:15:58.776643 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.81s
2026-03-28 04:15:58.776648 | orchestrator | Create report output directory ------------------------------------------ 0.77s
2026-03-28 04:15:58.776654 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.71s
2026-03-28 04:15:58.776660 | orchestrator | Aggregate test results step one ----------------------------------------- 0.70s
2026-03-28 04:15:58.776665 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.65s
2026-03-28 04:15:58.776671 |
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.64s 2026-03-28 04:15:58.776677 | orchestrator | Print report file information ------------------------------------------- 0.62s 2026-03-28 04:15:58.776682 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.59s 2026-03-28 04:15:58.776688 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.59s 2026-03-28 04:15:58.776693 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.58s 2026-03-28 04:15:58.776699 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s 2026-03-28 04:15:58.776705 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-03-28 04:15:58.776715 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2026-03-28 04:15:58.776721 | orchestrator | Get OSDs that are not up or in ------------------------------------------ 0.36s 2026-03-28 04:15:59.126607 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-28 04:15:59.135595 | orchestrator | + set -e 2026-03-28 04:15:59.135693 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 04:15:59.135717 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 04:15:59.135736 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 04:15:59.135763 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 04:15:59.135786 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 04:15:59.135803 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 04:15:59.135822 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 04:15:59.135841 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 04:15:59.135859 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 04:15:59.135877 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 
04:15:59.135894 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 04:15:59.135913 | orchestrator | ++ export ARA=false 2026-03-28 04:15:59.136014 | orchestrator | ++ ARA=false 2026-03-28 04:15:59.136037 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 04:15:59.136049 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 04:15:59.136060 | orchestrator | ++ export TEMPEST=false 2026-03-28 04:15:59.136071 | orchestrator | ++ TEMPEST=false 2026-03-28 04:15:59.136081 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 04:15:59.136092 | orchestrator | ++ IS_ZUUL=true 2026-03-28 04:15:59.136103 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:15:59.136115 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:15:59.136126 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 04:15:59.136136 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 04:15:59.136148 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 04:15:59.136162 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 04:15:59.136174 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 04:15:59.136187 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 04:15:59.136200 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 04:15:59.136212 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 04:15:59.136225 | orchestrator | + source /etc/os-release 2026-03-28 04:15:59.136237 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-28 04:15:59.136250 | orchestrator | ++ NAME=Ubuntu 2026-03-28 04:15:59.136262 | orchestrator | ++ VERSION_ID=24.04 2026-03-28 04:15:59.136276 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-28 04:15:59.136288 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-28 04:15:59.136301 | orchestrator | ++ ID=ubuntu 2026-03-28 04:15:59.136312 | orchestrator | ++ ID_LIKE=debian 2026-03-28 04:15:59.136325 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-28 04:15:59.136337 | orchestrator 
| ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-28 04:15:59.136349 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-28 04:15:59.136361 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-28 04:15:59.136375 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-28 04:15:59.136388 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-28 04:15:59.136400 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-28 04:15:59.136413 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-28 04:15:59.136427 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 04:15:59.175784 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 04:16:25.584078 | orchestrator | 2026-03-28 04:16:25.584237 | orchestrator | # Status of Elasticsearch 2026-03-28 04:16:25.584265 | orchestrator | 2026-03-28 04:16:25.584288 | orchestrator | + pushd /opt/configuration/contrib 2026-03-28 04:16:25.584311 | orchestrator | + echo 2026-03-28 04:16:25.584333 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-28 04:16:25.584354 | orchestrator | + echo 2026-03-28 04:16:25.584375 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-28 04:16:25.799079 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-28 04:16:25.799248 | orchestrator | 2026-03-28 04:16:25.799275 | orchestrator | # Status of MariaDB 2026-03-28 04:16:25.799295 | orchestrator | + echo 2026-03-28 04:16:25.799311 | orchestrator | + echo '# Status of MariaDB' 2026-03-28 04:16:25.799326 | orchestrator | 2026-03-28 04:16:25.799341 | orchestrator | + echo 2026-03-28 04:16:25.799862 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 04:16:25.878863 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 04:16:25.879048 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 04:16:25.879064 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-28 04:16:25.879078 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-28 04:16:25.969079 | orchestrator | Reading package lists... 2026-03-28 04:16:26.443264 | orchestrator | Building dependency tree... 2026-03-28 04:16:26.444138 | orchestrator | Reading state information... 2026-03-28 04:16:27.159504 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-28 04:16:27.159635 | orchestrator | bc set to manually installed. 2026-03-28 04:16:27.159662 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-28 04:16:27.958744 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-28 04:16:27.959445 | orchestrator | 2026-03-28 04:16:27.959486 | orchestrator | # Status of Prometheus 2026-03-28 04:16:27.959501 | orchestrator | 2026-03-28 04:16:27.959512 | orchestrator | + echo 2026-03-28 04:16:27.959524 | orchestrator | + echo '# Status of Prometheus' 2026-03-28 04:16:27.959535 | orchestrator | + echo 2026-03-28 04:16:27.959547 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-28 04:16:28.030636 | orchestrator | Unauthorized 2026-03-28 04:16:28.035809 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-28 04:16:28.102259 | orchestrator | Unauthorized 2026-03-28 04:16:28.105329 | orchestrator | 2026-03-28 04:16:28.105408 | orchestrator | # Status of RabbitMQ 2026-03-28 04:16:28.105419 | orchestrator | 2026-03-28 04:16:28.105426 | orchestrator | + echo 2026-03-28 04:16:28.105433 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-28 04:16:28.105439 | orchestrator | + echo 2026-03-28 04:16:28.106217 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-28 04:16:28.160098 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 04:16:28.160186 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 04:16:28.160200 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-28 04:16:28.672232 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-28 04:16:28.689369 | orchestrator | 2026-03-28 04:16:28.689457 | orchestrator | # Status of Redis 2026-03-28 04:16:28.689469 | orchestrator | 2026-03-28 04:16:28.689479 | orchestrator | + echo 2026-03-28 04:16:28.689492 | orchestrator | + echo '# Status of Redis' 2026-03-28 04:16:28.689508 | orchestrator | + echo 2026-03-28 04:16:28.689523 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-28 04:16:28.695031 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002268s;;;0.000000;10.000000 2026-03-28 04:16:28.695115 | orchestrator | + popd 2026-03-28 04:16:28.695131 | orchestrator | 2026-03-28 04:16:28.695143 | orchestrator | # Create backup of MariaDB database 2026-03-28 04:16:28.695155 | orchestrator | 2026-03-28 04:16:28.695166 | orchestrator | + echo 2026-03-28 04:16:28.695177 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-28 04:16:28.695188 | orchestrator | + echo 2026-03-28 04:16:28.695200 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-28 04:16:30.961336 | orchestrator | 2026-03-28 04:16:30 | INFO  | Task 14e417b0-c527-46a2-9778-bc548bf42f94 (mariadb_backup) was prepared for execution. 2026-03-28 04:16:30.961447 | orchestrator | 2026-03-28 04:16:30 | INFO  | It takes a moment until task 14e417b0-c527-46a2-9778-bc548bf42f94 (mariadb_backup) has been started and output is visible here. 
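The check script above branches on a `semver` helper that compares the installed `MANAGER_VERSION` (9.5.0) against `10.0.0-0`; the exit value `-1` ("less than") selects the pre-10 code paths, e.g. the `root_shard_0` MariaDB user. A minimal sketch of such a comparison, assuming dotted numeric versions with an optional `-` pre-release suffix (the helper's name and the compared values are from the log; this reimplementation is purely illustrative):

```python
# Illustrative semver-style compare: returns -1, 0 or 1 for a < b, a == b, a > b.
# Simplified pre-release handling: a version WITH a "-" suffix sorts before
# the same core version without one (as in semver).

def semver_compare(a: str, b: str) -> int:
    def parse(v: str):
        core, _, pre = v.partition("-")
        nums = tuple(int(x) for x in core.split("."))
        # (True, "") for a release sorts after (False, pre) for a pre-release
        # of the same core version.
        return nums, (pre == "", pre)

    pa, pb = parse(a), parse(b)
    return (pa > pb) - (pa < pb)

if __name__ == "__main__":
    # The comparison seen in the log: 9.5.0 vs. 10.0.0-0
    print(semver_compare("9.5.0", "10.0.0-0"))  # -1
```

With `-1` the script's `[[ -1 -ge 0 ]]` test fails, so the legacy (pre-10) branch runs, matching the trace above.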
2026-03-28 04:20:38.287188 | orchestrator | 2026-03-28 04:20:38.287281 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 04:20:38.287292 | orchestrator | 2026-03-28 04:20:38.287316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 04:20:38.287324 | orchestrator | Saturday 28 March 2026 04:16:35 +0000 (0:00:00.253) 0:00:00.253 ******** 2026-03-28 04:20:38.287331 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:20:38.287357 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:20:38.287409 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:20:38.287416 | orchestrator | 2026-03-28 04:20:38.287423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 04:20:38.287430 | orchestrator | Saturday 28 March 2026 04:16:35 +0000 (0:00:00.359) 0:00:00.612 ******** 2026-03-28 04:20:38.287437 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-28 04:20:38.287445 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-28 04:20:38.287451 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 04:20:38.287458 | orchestrator | 2026-03-28 04:20:38.287465 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 04:20:38.287471 | orchestrator | 2026-03-28 04:20:38.287478 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 04:20:38.287485 | orchestrator | Saturday 28 March 2026 04:16:36 +0000 (0:00:00.703) 0:00:01.316 ******** 2026-03-28 04:20:38.287491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 04:20:38.287498 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 04:20:38.287505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 04:20:38.287511 | orchestrator | 
2026-03-28 04:20:38.287518 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 04:20:38.287528 | orchestrator | Saturday 28 March 2026 04:16:37 +0000 (0:00:00.471) 0:00:01.787 ******** 2026-03-28 04:20:38.287536 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:20:38.287544 | orchestrator | 2026-03-28 04:20:38.287550 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-28 04:20:38.287558 | orchestrator | Saturday 28 March 2026 04:16:37 +0000 (0:00:00.824) 0:00:02.612 ******** 2026-03-28 04:20:38.287564 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:20:38.287571 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:20:38.287578 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:20:38.287584 | orchestrator | 2026-03-28 04:20:38.287591 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-28 04:20:38.287599 | orchestrator | Saturday 28 March 2026 04:16:41 +0000 (0:00:03.373) 0:00:05.986 ******** 2026-03-28 04:20:38.287610 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:20:38.287621 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:20:38.287633 | orchestrator | 2026-03-28 04:20:38.287643 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-28 04:20:38.287654 | orchestrator | 2026-03-28 04:20:38.287665 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-28 04:20:38.287678 | orchestrator | 2026-03-28 04:20:38.287690 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-28 04:20:38.287702 | orchestrator | 2026-03-28 04:20:38.287714 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is 
running] *** 2026-03-28 04:20:38.287722 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-28 04:20:38.287730 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-28 04:20:38.287737 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 04:20:38.287745 | orchestrator | mariadb_bootstrap_restart 2026-03-28 04:20:38.287753 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:20:38.287761 | orchestrator | 2026-03-28 04:20:38.287769 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 04:20:38.287777 | orchestrator | skipping: no hosts matched 2026-03-28 04:20:38.287784 | orchestrator | 2026-03-28 04:20:38.287792 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 04:20:38.287799 | orchestrator | skipping: no hosts matched 2026-03-28 04:20:38.287807 | orchestrator | 2026-03-28 04:20:38.287814 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 04:20:38.287830 | orchestrator | skipping: no hosts matched 2026-03-28 04:20:38.287838 | orchestrator | 2026-03-28 04:20:38.287846 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 04:20:38.287853 | orchestrator | 2026-03-28 04:20:38.287861 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 04:20:38.287868 | orchestrator | Saturday 28 March 2026 04:20:37 +0000 (0:03:55.812) 0:04:01.799 ******** 2026-03-28 04:20:38.287876 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:20:38.287883 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:20:38.287891 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:20:38.287898 | orchestrator | 2026-03-28 04:20:38.287906 | orchestrator | TASK [Include mariadb 
post-upgrade.yml] **************************************** 2026-03-28 04:20:38.287913 | orchestrator | Saturday 28 March 2026 04:20:37 +0000 (0:00:00.319) 0:04:02.118 ******** 2026-03-28 04:20:38.287921 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:20:38.287929 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:20:38.287936 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:20:38.287944 | orchestrator | 2026-03-28 04:20:38.287951 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:20:38.287960 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:20:38.287968 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 04:20:38.287993 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 04:20:38.288000 | orchestrator | 2026-03-28 04:20:38.288008 | orchestrator | 2026-03-28 04:20:38.288016 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:20:38.288024 | orchestrator | Saturday 28 March 2026 04:20:37 +0000 (0:00:00.438) 0:04:02.557 ******** 2026-03-28 04:20:38.288032 | orchestrator | =============================================================================== 2026-03-28 04:20:38.288040 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 235.81s 2026-03-28 04:20:38.288047 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.37s 2026-03-28 04:20:38.288055 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.82s 2026-03-28 04:20:38.288061 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2026-03-28 04:20:38.288068 | orchestrator | mariadb : Group MariaDB hosts based on shards 
--------------------------- 0.47s 2026-03-28 04:20:38.288074 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s 2026-03-28 04:20:38.288081 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-03-28 04:20:38.288087 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-03-28 04:20:38.653751 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-28 04:20:38.664935 | orchestrator | + set -e 2026-03-28 04:20:38.665007 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:20:38.666237 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:20:38.666339 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:20:38.666356 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:20:38.666390 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:20:38.666709 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 04:20:38.669152 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 04:20:38.676083 | orchestrator | 2026-03-28 04:20:38.676136 | orchestrator | # OpenStack endpoints 2026-03-28 04:20:38.676145 | orchestrator | 2026-03-28 04:20:38.676151 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 04:20:38.676157 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 04:20:38.676163 | orchestrator | + export OS_CLOUD=admin 2026-03-28 04:20:38.676169 | orchestrator | + OS_CLOUD=admin 2026-03-28 04:20:38.676175 | orchestrator | + echo 2026-03-28 04:20:38.676230 | orchestrator | + echo '# OpenStack endpoints' 2026-03-28 04:20:38.676237 | orchestrator | + echo 2026-03-28 04:20:38.676242 | orchestrator | + openstack endpoint list 2026-03-28 04:20:42.028218 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-28 04:20:42.028349 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-28 04:20:42.029133 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-28 04:20:42.029155 | orchestrator | | 0e5ee2b7239d4f418f1c679ab6988873 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-28 04:20:42.029167 | orchestrator | | 28dcd731c8224af2970a757d4d7c9d6d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-28 04:20:42.029178 | orchestrator | | 2bce6bf7027e4a008cbc290ace44a9d5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-28 04:20:42.029189 | orchestrator | | 340a75be602c4f5b9c317b9f6c931a70 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-28 04:20:42.029200 | orchestrator | | 3d234a34fe7046edb5a00e6df9906ca0 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-28 04:20:42.029211 | orchestrator | | 43890fa29a22405ea42a4ccf2e1030de | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-28 04:20:42.029222 | orchestrator | | 450a81396d434a06ac766b6a835a4bce | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-28 04:20:42.029233 | orchestrator | | 543949d48aaf418194394805b8ae106b | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-28 04:20:42.029244 | orchestrator | | 5ac481b379de4e809ab49e9d46331826 | 
RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-03-28 04:20:42.029254 | orchestrator | | 631dbc7cf5e441e58ab61dc20d59fccf | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-28 04:20:42.029265 | orchestrator | | 7464d54dd91e418d901542c0dec3646d | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-03-28 04:20:42.029276 | orchestrator | | 7611517bf8f64b0191787d48a48e2da7 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-03-28 04:20:42.029287 | orchestrator | | 8346bd5bee5f43d6bc4c59b6640b9eda | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-28 04:20:42.029297 | orchestrator | | 84ec9391a71c42f8b6393cb16fbb98fe | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-28 04:20:42.029308 | orchestrator | | 870189711e51454484e276b1ca724b0c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-28 04:20:42.029319 | orchestrator | | 971850dcead74166a817a9591e07a405 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-28 04:20:42.029330 | orchestrator | | aae4faa5c90941ff8183064ebc5628d2 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-28 04:20:42.029399 | orchestrator | | b64cd74ae08c495a830d346e010aa713 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-28 04:20:42.029411 | orchestrator | | b794fe7aaca2480395bec0440398ddfe | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-03-28 04:20:42.029436 | orchestrator | | be7903572669410cbc7feb567667f542 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 
2026-03-28 04:20:42.029468 | orchestrator | | d096b8f91abb45df992ae2f1eb9c440d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-28 04:20:42.029479 | orchestrator | | dc1c55137161447e966a8c1e0e3c17e0 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-03-28 04:20:42.029490 | orchestrator | | e1b1fc8329734d958c2b78fde3e8b168 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-28 04:20:42.029501 | orchestrator | | e60fe37c7d4d4106b21162e275378d28 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-28 04:20:42.029512 | orchestrator | | e6629cf60f5647808d675c37158dffaa | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-28 04:20:42.029523 | orchestrator | | ed3a3d94bec5450f9c8c05f1d61ee731 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-28 04:20:42.029533 | orchestrator | | f0a95368f5cb406ea9b9fe4f2d2cd4c0 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-28 04:20:42.029544 | orchestrator | | f14f3afd5d6145eeb97a5ea049c4306e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-28 04:20:42.029555 | orchestrator | | f3f703e70b3348798f14655067995005 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-28 04:20:42.029565 | orchestrator | | fa32fe3376944d26b540c4e19b8d35bb | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-03-28 04:20:42.029576 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 
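The `openstack endpoint list` table above shows each service registered with both a public and an internal interface. A small sketch of how such a table can be parsed to verify interface coverage per service; the sample rows are abbreviated from the output above, and the column layout assumed is the standard openstackclient ASCII table (this checker is illustrative, not part of the job's scripts):

```python
# Parse `openstack endpoint list`-style table rows and report, per service,
# whether both a "public" and an "internal" endpoint are registered.
from collections import defaultdict

# Abbreviated sample rows (no header/border lines) taken from the log output.
SAMPLE = """\
| 8346bd5bee5f43d6bc4c59b6640b9eda | RegionOne | keystone | identity | True | public   | https://api.testbed.osism.xyz:5000     |
| 2bce6bf7027e4a008cbc290ace44a9d5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
| aae4faa5c90941ff8183064ebc5628d2 | RegionOne | glance   | image    | True | public   | https://api.testbed.osism.xyz:9292     |
"""

def interface_coverage(table: str) -> dict:
    seen = defaultdict(set)
    for line in table.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 7:
            continue  # skip border/blank lines
        _id, _region, name, _type, _enabled, interface, _url = cells
        seen[name].add(interface)
    return {name: {"public", "internal"} <= ifaces for name, ifaces in seen.items()}

print(interface_coverage(SAMPLE))
# keystone has both interfaces; glance (in this truncated sample) lacks internal
```

In the full table above every service does expose both interfaces, which is what a post-deploy check like this would confirm.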
2026-03-28 04:20:42.298781 | orchestrator | 2026-03-28 04:20:42.298892 | orchestrator | # Cinder 2026-03-28 04:20:42.298914 | orchestrator | 2026-03-28 04:20:42.298931 | orchestrator | + echo 2026-03-28 04:20:42.298949 | orchestrator | + echo '# Cinder' 2026-03-28 04:20:42.298966 | orchestrator | + echo 2026-03-28 04:20:42.298982 | orchestrator | + openstack volume service list 2026-03-28 04:20:45.159775 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:45.159855 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-28 04:20:45.159863 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:45.159869 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T04:20:44.000000 | 2026-03-28 04:20:45.159874 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T04:20:44.000000 | 2026-03-28 04:20:45.159879 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T04:20:44.000000 | 2026-03-28 04:20:45.159885 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-28T04:20:43.000000 | 2026-03-28 04:20:45.159908 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-28T04:20:42.000000 | 2026-03-28 04:20:45.159914 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-28T04:20:35.000000 | 2026-03-28 04:20:45.159919 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-28T04:20:39.000000 | 2026-03-28 04:20:45.159924 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-28T04:20:41.000000 | 2026-03-28 04:20:45.159929 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 
2026-03-28T04:20:41.000000 | 2026-03-28 04:20:45.159934 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:45.454102 | orchestrator | 2026-03-28 04:20:45.454203 | orchestrator | # Neutron 2026-03-28 04:20:45.454219 | orchestrator | 2026-03-28 04:20:45.454231 | orchestrator | + echo 2026-03-28 04:20:45.454243 | orchestrator | + echo '# Neutron' 2026-03-28 04:20:45.454257 | orchestrator | + echo 2026-03-28 04:20:45.454268 | orchestrator | + openstack network agent list 2026-03-28 04:20:48.096458 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 04:20:48.096552 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-28 04:20:48.096568 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 04:20:48.096600 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096614 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096627 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096640 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096653 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096665 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-28 04:20:48.096674 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | 
OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-28 04:20:48.096681 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-28 04:20:48.096688 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-28 04:20:48.096696 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-28 04:20:48.416639 | orchestrator | + openstack network service provider list 2026-03-28 04:20:51.052578 | orchestrator | +---------------+------+---------+ 2026-03-28 04:20:51.052703 | orchestrator | | Service Type | Name | Default | 2026-03-28 04:20:51.052716 | orchestrator | +---------------+------+---------+ 2026-03-28 04:20:51.052724 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-28 04:20:51.052732 | orchestrator | +---------------+------+---------+ 2026-03-28 04:20:51.363315 | orchestrator | 2026-03-28 04:20:51.363461 | orchestrator | # Nova 2026-03-28 04:20:51.363469 | orchestrator | 2026-03-28 04:20:51.363473 | orchestrator | + echo 2026-03-28 04:20:51.363507 | orchestrator | + echo '# Nova' 2026-03-28 04:20:51.363513 | orchestrator | + echo 2026-03-28 04:20:51.363518 | orchestrator | + openstack compute service list 2026-03-28 04:20:54.806109 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:54.806253 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-28 04:20:54.806269 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:54.806281 | orchestrator | | 
10d2abd1-5602-4f8b-a3c0-2a1d726b3aaa | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T04:20:45.000000 | 2026-03-28 04:20:54.806293 | orchestrator | | ab4c8e9e-541a-49a0-8afa-876bb7b711da | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T04:20:51.000000 | 2026-03-28 04:20:54.806303 | orchestrator | | 2d7c8e9b-13f7-4c5d-b5cf-fef7f745929c | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T04:20:51.000000 | 2026-03-28 04:20:54.806314 | orchestrator | | f21748c4-9c68-42fa-ba41-9c982d62833c | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-28T04:20:47.000000 | 2026-03-28 04:20:54.806325 | orchestrator | | 4a3eb079-e48c-4109-b5d0-f016a260ccc2 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-28T04:20:49.000000 | 2026-03-28 04:20:54.806390 | orchestrator | | 935cea21-4777-4ef3-af0c-e98f29b87067 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-28T04:20:49.000000 | 2026-03-28 04:20:54.806420 | orchestrator | | 61f87e25-8020-4dd7-8716-6b94a327529e | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-28T04:20:46.000000 | 2026-03-28 04:20:54.806440 | orchestrator | | 525289d1-a003-449f-bed8-c14fcd1d2346 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-28T04:20:47.000000 | 2026-03-28 04:20:54.806458 | orchestrator | | b7173cb6-4420-4d94-bf8d-9b0070cc839b | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-28T04:20:47.000000 | 2026-03-28 04:20:54.806475 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-28 04:20:55.133935 | orchestrator | + openstack hypervisor list 2026-03-28 04:20:58.050621 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 04:20:58.050693 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP 
| State | 2026-03-28 04:20:58.050699 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 04:20:58.050704 | orchestrator | | 7642fe0b-830f-4da7-bd5f-6147c2d18145 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-28 04:20:58.050712 | orchestrator | | 61ea9c54-27df-4285-946e-4aba4b6d3829 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-28 04:20:58.050716 | orchestrator | | 12c6d2ca-f34b-4fc9-9146-078e8ddf0557 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-28 04:20:58.050720 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-28 04:20:58.356290 | orchestrator | 2026-03-28 04:20:58.356469 | orchestrator | # Run OpenStack test play 2026-03-28 04:20:58.356499 | orchestrator | 2026-03-28 04:20:58.356518 | orchestrator | + echo 2026-03-28 04:20:58.356539 | orchestrator | + echo '# Run OpenStack test play' 2026-03-28 04:20:58.356559 | orchestrator | + echo 2026-03-28 04:20:58.356578 | orchestrator | + osism apply --environment openstack test 2026-03-28 04:21:00.419846 | orchestrator | 2026-03-28 04:21:00 | INFO  | Trying to run play test in environment openstack 2026-03-28 04:21:10.554475 | orchestrator | 2026-03-28 04:21:10 | INFO  | Task 250d1816-274b-41fd-b03f-e3b21c1bb096 (test) was prepared for execution. 2026-03-28 04:21:10.554557 | orchestrator | 2026-03-28 04:21:10 | INFO  | It takes a moment until task 250d1816-274b-41fd-b03f-e3b21c1bb096 (test) has been started and output is visible here. 
2026-03-28 04:23:53.369160 | orchestrator | 2026-03-28 04:23:53.369250 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-28 04:23:53.369261 | orchestrator | 2026-03-28 04:23:53.369269 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-28 04:23:53.369276 | orchestrator | Saturday 28 March 2026 04:21:15 +0000 (0:00:00.098) 0:00:00.098 ******** 2026-03-28 04:23:53.369283 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369290 | orchestrator | 2026-03-28 04:23:53.369296 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-28 04:23:53.369303 | orchestrator | Saturday 28 March 2026 04:21:18 +0000 (0:00:03.849) 0:00:03.947 ******** 2026-03-28 04:23:53.369309 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369316 | orchestrator | 2026-03-28 04:23:53.369322 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-28 04:23:53.369328 | orchestrator | Saturday 28 March 2026 04:21:23 +0000 (0:00:04.406) 0:00:08.354 ******** 2026-03-28 04:23:53.369335 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369341 | orchestrator | 2026-03-28 04:23:53.369347 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-28 04:23:53.369354 | orchestrator | Saturday 28 March 2026 04:21:30 +0000 (0:00:07.058) 0:00:15.412 ******** 2026-03-28 04:23:53.369360 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369366 | orchestrator | 2026-03-28 04:23:53.369373 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-28 04:23:53.369379 | orchestrator | Saturday 28 March 2026 04:21:34 +0000 (0:00:04.146) 0:00:19.559 ******** 2026-03-28 04:23:53.369386 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369392 | orchestrator | 2026-03-28 04:23:53.369398 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-28 04:23:53.369404 | orchestrator | Saturday 28 March 2026 04:21:39 +0000 (0:00:04.559) 0:00:24.119 ******** 2026-03-28 04:23:53.369426 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-28 04:23:53.369434 | orchestrator | changed: [localhost] => (item=member) 2026-03-28 04:23:53.369441 | orchestrator | changed: [localhost] => (item=creator) 2026-03-28 04:23:53.369447 | orchestrator | 2026-03-28 04:23:53.369453 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-28 04:23:53.369460 | orchestrator | Saturday 28 March 2026 04:21:51 +0000 (0:00:12.095) 0:00:36.214 ******** 2026-03-28 04:23:53.369466 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369472 | orchestrator | 2026-03-28 04:23:53.369478 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-28 04:23:53.369485 | orchestrator | Saturday 28 March 2026 04:21:55 +0000 (0:00:04.327) 0:00:40.542 ******** 2026-03-28 04:23:53.369491 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369497 | orchestrator | 2026-03-28 04:23:53.369503 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-28 04:23:53.369509 | orchestrator | Saturday 28 March 2026 04:22:00 +0000 (0:00:05.049) 0:00:45.592 ******** 2026-03-28 04:23:53.369516 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369522 | orchestrator | 2026-03-28 04:23:53.369528 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-28 04:23:53.369535 | orchestrator | Saturday 28 March 2026 04:22:05 +0000 (0:00:04.465) 0:00:50.057 ******** 2026-03-28 04:23:53.369541 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369547 | orchestrator | 2026-03-28 04:23:53.369553 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-03-28 04:23:53.369560 | orchestrator | Saturday 28 March 2026 04:22:09 +0000 (0:00:04.288) 0:00:54.346 ******** 2026-03-28 04:23:53.369566 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369572 | orchestrator | 2026-03-28 04:23:53.369578 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-28 04:23:53.369585 | orchestrator | Saturday 28 March 2026 04:22:13 +0000 (0:00:04.428) 0:00:58.774 ******** 2026-03-28 04:23:53.369592 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369598 | orchestrator | 2026-03-28 04:23:53.369604 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-28 04:23:53.369628 | orchestrator | Saturday 28 March 2026 04:22:18 +0000 (0:00:04.837) 0:01:03.611 ******** 2026-03-28 04:23:53.369634 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369640 | orchestrator | 2026-03-28 04:23:53.369647 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-28 04:23:53.369653 | orchestrator | Saturday 28 March 2026 04:22:23 +0000 (0:00:04.995) 0:01:08.607 ******** 2026-03-28 04:23:53.369659 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369665 | orchestrator | 2026-03-28 04:23:53.369673 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-28 04:23:53.369680 | orchestrator | Saturday 28 March 2026 04:22:29 +0000 (0:00:05.949) 0:01:14.556 ******** 2026-03-28 04:23:53.369687 | orchestrator | changed: [localhost] 2026-03-28 04:23:53.369694 | orchestrator | 2026-03-28 04:23:53.369701 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-28 04:23:53.369709 | orchestrator | 2026-03-28 04:23:53.369716 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-28 04:23:53.369727 
| orchestrator | Saturday 28 March 2026 04:22:41 +0000 (0:00:11.573) 0:01:26.130 ******** 2026-03-28 04:23:53.369735 | orchestrator | ok: [localhost] 2026-03-28 04:23:53.369743 | orchestrator | 2026-03-28 04:23:53.369750 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-28 04:23:53.369757 | orchestrator | Saturday 28 March 2026 04:22:45 +0000 (0:00:03.960) 0:01:30.091 ******** 2026-03-28 04:23:53.369772 | orchestrator | skipping: [localhost] 2026-03-28 04:23:53.369780 | orchestrator | 2026-03-28 04:23:53.369787 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-28 04:23:53.369795 | orchestrator | Saturday 28 March 2026 04:22:45 +0000 (0:00:00.056) 0:01:30.148 ******** 2026-03-28 04:23:53.369802 | orchestrator | skipping: [localhost] 2026-03-28 04:23:53.369809 | orchestrator | 2026-03-28 04:23:53.369816 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-28 04:23:53.369824 | orchestrator | Saturday 28 March 2026 04:22:45 +0000 (0:00:00.069) 0:01:30.218 ******** 2026-03-28 04:23:53.369831 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-28 04:23:53.369839 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-28 04:23:53.369858 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-28 04:23:53.369866 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-28 04:23:53.369873 | orchestrator | skipping: [localhost] => (item=test)  2026-03-28 04:23:53.369880 | orchestrator | skipping: [localhost] 2026-03-28 04:23:53.369887 | orchestrator | 2026-03-28 04:23:53.369895 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-28 04:23:53.369902 | orchestrator | Saturday 28 March 2026 04:22:45 +0000 (0:00:00.190) 0:01:30.409 ******** 2026-03-28 04:23:53.369909 | orchestrator | skipping: [localhost] 2026-03-28 
04:23:53.369916 | orchestrator | 2026-03-28 04:23:53.369923 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-28 04:23:53.369931 | orchestrator | Saturday 28 March 2026 04:22:45 +0000 (0:00:00.163) 0:01:30.572 ******** 2026-03-28 04:23:53.369937 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 04:23:53.369944 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 04:23:53.369950 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 04:23:53.369956 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 04:23:53.369963 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 04:23:53.369969 | orchestrator | 2026-03-28 04:23:53.369975 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-28 04:23:53.369981 | orchestrator | Saturday 28 March 2026 04:22:50 +0000 (0:00:05.059) 0:01:35.632 ******** 2026-03-28 04:23:53.369988 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-28 04:23:53.369994 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-28 04:23:53.370006 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-28 04:23:53.370059 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
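The `FAILED - RETRYING` lines above are the normal output of Ansible's fire-and-forget pattern: the create task runs with `async`/`poll: 0`, and a follow-up task polls `async_status` until each background job finishes. A sketch of that pattern as suggested by the log output (module options and variable names here are illustrative, not taken from the actual play):

```yaml
- name: Create test instances
  openstack.cloud.server:
    cloud: admin
    name: "{{ item }}"
    # image/flavor/network options elided
  loop: "{{ instance_names }}"
  async: 600          # run in the background for up to 10 minutes
  poll: 0             # do not wait here
  register: create_jobs

- name: Wait for instance creation to complete
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ create_jobs.results }}"
  register: job_result
  until: job_result.finished   # each "FAILED - RETRYING" line is one poll
  retries: 60
  delay: 5
```

This is why the retries counting down from 60 are expected noise rather than an error: the task only fails if an instance is still not up after the final retry.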
2026-03-28 04:23:53.370145 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j240517107420.3777', 'results_file': '/ansible/.ansible_async/j240517107420.3777', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370157 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j244895964720.3802', 'results_file': '/ansible/.ansible_async/j244895964720.3802', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370164 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j873447040594.3827', 'results_file': '/ansible/.ansible_async/j873447040594.3827', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370170 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j738683169487.3852', 'results_file': '/ansible/.ansible_async/j738683169487.3852', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370176 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j413997848276.3877', 'results_file': '/ansible/.ansible_async/j413997848276.3877', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370182 | orchestrator | 2026-03-28 04:23:53.370189 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-28 04:23:53.370195 | orchestrator | Saturday 28 March 2026 04:23:37 +0000 (0:00:47.347) 0:02:22.980 ******** 2026-03-28 04:23:53.370201 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 04:23:53.370207 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 04:23:53.370214 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 04:23:53.370220 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-03-28 04:23:53.370226 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 04:23:53.370232 | orchestrator | 2026-03-28 04:23:53.370238 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-28 04:23:53.370244 | orchestrator | Saturday 28 March 2026 04:23:43 +0000 (0:00:05.228) 0:02:28.208 ******** 2026-03-28 04:23:53.370250 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-28 04:23:53.370263 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j991831247726.3981', 'results_file': '/ansible/.ansible_async/j991831247726.3981', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370269 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j916233970616.4006', 'results_file': '/ansible/.ansible_async/j916233970616.4006', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370276 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j706985011544.4031', 'results_file': '/ansible/.ansible_async/j706985011544.4031', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 04:23:53.370287 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j134456141162.4056', 'results_file': '/ansible/.ansible_async/j134456141162.4056', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762760 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j551740229370.4081', 'results_file': '/ansible/.ansible_async/j551740229370.4081', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762878 | orchestrator | 2026-03-28 
04:24:35.762889 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-28 04:24:35.762897 | orchestrator | Saturday 28 March 2026 04:23:53 +0000 (0:00:10.141) 0:02:38.350 ******** 2026-03-28 04:24:35.762903 | orchestrator | changed: [localhost] => (item=test) 2026-03-28 04:24:35.762912 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-28 04:24:35.762918 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-28 04:24:35.762925 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-28 04:24:35.762931 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-28 04:24:35.762937 | orchestrator | 2026-03-28 04:24:35.762943 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-28 04:24:35.762950 | orchestrator | Saturday 28 March 2026 04:23:59 +0000 (0:00:05.696) 0:02:44.047 ******** 2026-03-28 04:24:35.762956 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
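The waits in this play succeed once every instance reports `ACTIVE`, the same status the later `openstack --os-cloud test server list` output confirms. A minimal sketch of that status check over `server list -f json` output (the sample data below is illustrative, not from this job):

```python
import json

# Hypothetical sample in the shape of `openstack server list -f json`.
servers = json.loads("""
[
  {"ID": "088f1b0f-6e11-457c-a806-2a718a1189bc", "Name": "test",   "Status": "ACTIVE"},
  {"ID": "ea5b0982-bd3f-4a18-8567-cffdbfd611ae", "Name": "test-1", "Status": "ACTIVE"},
  {"ID": "9d846f4a-77f6-4871-b369-1e61dc2f1578", "Name": "test-2", "Status": "ERROR"}
]
""")

def not_active(server_list):
    """Names of servers that have not reached ACTIVE."""
    return [s["Name"] for s in server_list if s["Status"] != "ACTIVE"]

print(not_active(servers))  # -> ['test-2'] for this sample
```

A polling loop around such a check is effectively what the play's `until`/`retries` tasks do on the Ansible side.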
2026-03-28 04:24:35.762964 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j766985508661.4150', 'results_file': '/ansible/.ansible_async/j766985508661.4150', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762970 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j385885894605.4175', 'results_file': '/ansible/.ansible_async/j385885894605.4175', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762977 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j615584429761.4201', 'results_file': '/ansible/.ansible_async/j615584429761.4201', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762983 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j423100685350.4227', 'results_file': '/ansible/.ansible_async/j423100685350.4227', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762989 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j612929903009.4253', 'results_file': '/ansible/.ansible_async/j612929903009.4253', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-28 04:24:35.762996 | orchestrator | 2026-03-28 04:24:35.763002 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-28 04:24:35.763007 | orchestrator | Saturday 28 March 2026 04:24:09 +0000 (0:00:10.439) 0:02:54.486 ******** 2026-03-28 04:24:35.763064 | orchestrator | changed: [localhost] 2026-03-28 04:24:35.763070 | orchestrator | 2026-03-28 04:24:35.763076 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-28 04:24:35.763082 | orchestrator | Saturday 28 March 2026 
04:24:16 +0000 (0:00:06.815) 0:03:01.302 ******** 2026-03-28 04:24:35.763088 | orchestrator | changed: [localhost] 2026-03-28 04:24:35.763094 | orchestrator | 2026-03-28 04:24:35.763099 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-28 04:24:35.763106 | orchestrator | Saturday 28 March 2026 04:24:29 +0000 (0:00:13.679) 0:03:14.981 ******** 2026-03-28 04:24:35.763112 | orchestrator | ok: [localhost] 2026-03-28 04:24:35.763118 | orchestrator | 2026-03-28 04:24:35.763124 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-28 04:24:35.763130 | orchestrator | Saturday 28 March 2026 04:24:35 +0000 (0:00:05.357) 0:03:20.339 ******** 2026-03-28 04:24:35.763136 | orchestrator | ok: [localhost] => { 2026-03-28 04:24:35.763142 | orchestrator |  "msg": "192.168.112.108" 2026-03-28 04:24:35.763149 | orchestrator | } 2026-03-28 04:24:35.763155 | orchestrator | 2026-03-28 04:24:35.763161 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:24:35.763169 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 04:24:35.763176 | orchestrator | 2026-03-28 04:24:35.763188 | orchestrator | 2026-03-28 04:24:35.763208 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:24:35.763214 | orchestrator | Saturday 28 March 2026 04:24:35 +0000 (0:00:00.044) 0:03:20.383 ******** 2026-03-28 04:24:35.763220 | orchestrator | =============================================================================== 2026-03-28 04:24:35.763226 | orchestrator | Wait for instance creation to complete --------------------------------- 47.35s 2026-03-28 04:24:35.763233 | orchestrator | Attach test volume ----------------------------------------------------- 13.68s 2026-03-28 04:24:35.763239 | orchestrator | Add member roles to user 
test ------------------------------------------ 12.10s 2026-03-28 04:24:35.763245 | orchestrator | Create test router ----------------------------------------------------- 11.57s 2026-03-28 04:24:35.763252 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.44s 2026-03-28 04:24:35.763258 | orchestrator | Wait for metadata to be added ------------------------------------------ 10.14s 2026-03-28 04:24:35.763264 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.06s 2026-03-28 04:24:35.763285 | orchestrator | Create test volume ------------------------------------------------------ 6.82s 2026-03-28 04:24:35.763291 | orchestrator | Create test subnet ------------------------------------------------------ 5.95s 2026-03-28 04:24:35.763297 | orchestrator | Add tag to instances ---------------------------------------------------- 5.70s 2026-03-28 04:24:35.763303 | orchestrator | Create floating ip address ---------------------------------------------- 5.36s 2026-03-28 04:24:35.763308 | orchestrator | Add metadata to instances ----------------------------------------------- 5.23s 2026-03-28 04:24:35.763314 | orchestrator | Create test instances --------------------------------------------------- 5.06s 2026-03-28 04:24:35.763319 | orchestrator | Create ssh security group ----------------------------------------------- 5.05s 2026-03-28 04:24:35.763325 | orchestrator | Create test network ----------------------------------------------------- 5.00s 2026-03-28 04:24:35.763330 | orchestrator | Create test keypair ----------------------------------------------------- 4.84s 2026-03-28 04:24:35.763336 | orchestrator | Create test user -------------------------------------------------------- 4.56s 2026-03-28 04:24:35.763341 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.47s 2026-03-28 04:24:35.763346 | orchestrator | Add rule to icmp security group 
----------------------------------------- 4.43s 2026-03-28 04:24:35.763352 | orchestrator | Create test-admin user -------------------------------------------------- 4.41s 2026-03-28 04:24:36.149811 | orchestrator | + server_list 2026-03-28 04:24:36.149880 | orchestrator | + openstack --os-cloud test server list 2026-03-28 04:24:40.089492 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 04:24:40.089604 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-28 04:24:40.089620 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 04:24:40.089632 | orchestrator | | 79ec3db1-644a-4753-a13d-53cdf72080eb | test-4 | ACTIVE | test=192.168.112.106, 192.168.200.25 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 04:24:40.089642 | orchestrator | | 9d846f4a-77f6-4871-b369-1e61dc2f1578 | test-2 | ACTIVE | test=192.168.112.146, 192.168.200.241 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 04:24:40.089654 | orchestrator | | a43f32f0-fbb0-4415-8045-1aa6ae966623 | test-3 | ACTIVE | test=192.168.112.137, 192.168.200.248 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 04:24:40.089664 | orchestrator | | ea5b0982-bd3f-4a18-8567-cffdbfd611ae | test-1 | ACTIVE | test=192.168.112.173, 192.168.200.239 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 04:24:40.089675 | orchestrator | | 088f1b0f-6e11-457c-a806-2a718a1189bc | test | ACTIVE | test=192.168.112.108, 192.168.200.159 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 04:24:40.089686 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 04:24:40.395416 | orchestrator | + openstack --os-cloud test server show test 2026-03-28 04:24:43.727072 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:43.727187 | orchestrator | | Field | Value | 2026-03-28 04:24:43.727210 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:43.727224 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 04:24:43.727236 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 04:24:43.727247 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 04:24:43.727258 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-28 04:24:43.727270 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 04:24:43.727281 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 04:24:43.727329 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 04:24:43.727343 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 04:24:43.727354 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 04:24:43.727366 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 04:24:43.727377 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 04:24:43.727388 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 04:24:43.727399 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-03-28 04:24:43.727411 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 04:24:43.727422 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 04:24:43.727433 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T04:23:21.000000 | 2026-03-28 04:24:43.727465 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 04:24:43.727512 | orchestrator | | accessIPv4 | | 2026-03-28 04:24:43.727525 | orchestrator | | accessIPv6 | | 2026-03-28 04:24:43.727542 | orchestrator | | addresses | test=192.168.112.108, 192.168.200.159 | 2026-03-28 04:24:43.727555 | orchestrator | | config_drive | | 2026-03-28 04:24:43.727569 | orchestrator | | created | 2026-03-28T04:22:54Z | 2026-03-28 04:24:43.727582 | orchestrator | | description | None | 2026-03-28 04:24:43.727595 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 04:24:43.727608 | orchestrator | | hostId | 24dc6707a9688401b7504573cea304caa9a03c886ccc3b680ce4f46e | 2026-03-28 04:24:43.727629 | orchestrator | | host_status | None | 2026-03-28 04:24:43.727650 | orchestrator | | id | 088f1b0f-6e11-457c-a806-2a718a1189bc | 2026-03-28 04:24:43.727663 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 04:24:43.727681 | orchestrator | | key_name | test | 2026-03-28 04:24:43.727694 | orchestrator | | locked | False | 2026-03-28 04:24:43.727707 | orchestrator | | locked_reason | None | 2026-03-28 04:24:43.727720 | orchestrator | | name | test | 2026-03-28 04:24:43.727733 | orchestrator | | pinned_availability_zone | None | 2026-03-28 04:24:43.727746 | orchestrator | | progress | 0 | 2026-03-28 04:24:43.727765 | orchestrator | | 
project_id | 1ab0db955dc440d5bc261f7c0b60f525 | 2026-03-28 04:24:43.727778 | orchestrator | | properties | hostname='test' | 2026-03-28 04:24:43.727798 | orchestrator | | security_groups | name='icmp' | 2026-03-28 04:24:43.727812 | orchestrator | | | name='ssh' | 2026-03-28 04:24:43.727830 | orchestrator | | server_groups | None | 2026-03-28 04:24:43.727844 | orchestrator | | status | ACTIVE | 2026-03-28 04:24:43.727856 | orchestrator | | tags | test | 2026-03-28 04:24:43.727869 | orchestrator | | trusted_image_certificates | None | 2026-03-28 04:24:43.727882 | orchestrator | | updated | 2026-03-28T04:23:44Z | 2026-03-28 04:24:43.727907 | orchestrator | | user_id | 1e476341894744ccafa781bf46821df7 | 2026-03-28 04:24:43.727919 | orchestrator | | volumes_attached | delete_on_termination='True', id='a8b572f9-6346-4159-b072-156ae758b8ab' | 2026-03-28 04:24:43.727930 | orchestrator | | | delete_on_termination='False', id='d793d465-e3b9-44c1-9c41-39987f349441' | 2026-03-28 04:24:43.730866 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:44.040535 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-28 04:24:47.279274 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 
04:24:47.279381 | orchestrator | | Field | Value | 2026-03-28 04:24:47.279391 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:47.279395 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 04:24:47.279400 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 04:24:47.279419 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 04:24:47.279423 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-28 04:24:47.279427 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 04:24:47.279431 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 04:24:47.279446 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 04:24:47.279450 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 04:24:47.279458 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 04:24:47.279462 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 04:24:47.279466 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 04:24:47.279470 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 04:24:47.279478 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 04:24:47.279482 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 04:24:47.279486 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 04:24:47.279490 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T04:23:21.000000 | 2026-03-28 04:24:47.279497 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 04:24:47.279501 | orchestrator | | accessIPv4 | | 2026-03-28 
04:24:47.279506 | orchestrator | | accessIPv6 | | 2026-03-28 04:24:47.279510 | orchestrator | | addresses | test=192.168.112.173, 192.168.200.239 | 2026-03-28 04:24:47.279514 | orchestrator | | config_drive | | 2026-03-28 04:24:47.279521 | orchestrator | | created | 2026-03-28T04:22:56Z | 2026-03-28 04:24:47.279526 | orchestrator | | description | None | 2026-03-28 04:24:47.279533 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 04:24:47.279539 | orchestrator | | hostId | 24dc6707a9688401b7504573cea304caa9a03c886ccc3b680ce4f46e | 2026-03-28 04:24:47.279545 | orchestrator | | host_status | None | 2026-03-28 04:24:47.279555 | orchestrator | | id | ea5b0982-bd3f-4a18-8567-cffdbfd611ae | 2026-03-28 04:24:47.279567 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 04:24:47.279577 | orchestrator | | key_name | test | 2026-03-28 04:24:47.279584 | orchestrator | | locked | False | 2026-03-28 04:24:47.279596 | orchestrator | | locked_reason | None | 2026-03-28 04:24:47.279600 | orchestrator | | name | test-1 | 2026-03-28 04:24:47.279604 | orchestrator | | pinned_availability_zone | None | 2026-03-28 04:24:47.279608 | orchestrator | | progress | 0 | 2026-03-28 04:24:47.279612 | orchestrator | | project_id | 1ab0db955dc440d5bc261f7c0b60f525 | 2026-03-28 04:24:47.279615 | orchestrator | | properties | hostname='test-1' | 2026-03-28 04:24:47.279624 | orchestrator | | security_groups | name='icmp' | 2026-03-28 04:24:47.279628 | orchestrator | | | name='ssh' | 2026-03-28 04:24:47.279635 | orchestrator | | server_groups | None | 2026-03-28 04:24:47.279643 | orchestrator | | status | ACTIVE | 2026-03-28 
04:24:47.279647 | orchestrator | | tags | test | 2026-03-28 04:24:47.279651 | orchestrator | | trusted_image_certificates | None | 2026-03-28 04:24:47.279655 | orchestrator | | updated | 2026-03-28T04:23:44Z | 2026-03-28 04:24:47.279658 | orchestrator | | user_id | 1e476341894744ccafa781bf46821df7 | 2026-03-28 04:24:47.279662 | orchestrator | | volumes_attached | delete_on_termination='True', id='d59cb95d-f496-47e9-8f69-bb3c8eaca97b' | 2026-03-28 04:24:47.284259 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:47.613387 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-28 04:24:50.664692 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:50.664846 | orchestrator | | Field | Value | 2026-03-28 04:24:50.664875 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:50.664912 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 04:24:50.664924 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 04:24:50.664935 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 04:24:50.664946 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-28 04:24:50.664957 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 04:24:50.664968 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 04:24:50.665111 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 04:24:50.665128 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 04:24:50.665147 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 04:24:50.665168 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 04:24:50.665182 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 04:24:50.665196 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 04:24:50.665209 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 04:24:50.665222 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 04:24:50.665236 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 04:24:50.665248 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T04:23:22.000000 | 2026-03-28 04:24:50.665267 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 04:24:50.665280 | orchestrator | | accessIPv4 | | 2026-03-28 04:24:50.665301 | orchestrator | | accessIPv6 | | 2026-03-28 04:24:50.665314 | orchestrator | | addresses | test=192.168.112.146, 192.168.200.241 | 2026-03-28 04:24:50.665327 | orchestrator | | config_drive | | 2026-03-28 04:24:50.665339 | orchestrator | | created | 2026-03-28T04:22:57Z | 2026-03-28 04:24:50.665351 | orchestrator | | description | None | 2026-03-28 04:24:50.665363 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 04:24:50.665375 | orchestrator | | hostId | e5ffd99a9b5b9f6dfccd908a784f3eb7a19085327fe5bcbc8ed6fa6e | 2026-03-28 04:24:50.665386 | orchestrator | | host_status | None | 2026-03-28 04:24:50.665407 | orchestrator | | id | 9d846f4a-77f6-4871-b369-1e61dc2f1578 | 2026-03-28 04:24:50.665424 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 04:24:50.665441 | orchestrator | | key_name | test | 2026-03-28 04:24:50.665453 | orchestrator | | locked | False | 2026-03-28 04:24:50.665466 | orchestrator | | locked_reason | None | 2026-03-28 04:24:50.665478 | orchestrator | | name | test-2 | 2026-03-28 04:24:50.665490 | orchestrator | | pinned_availability_zone | None | 2026-03-28 04:24:50.665502 | orchestrator | | progress | 0 | 2026-03-28 04:24:50.665514 | orchestrator | | project_id | 1ab0db955dc440d5bc261f7c0b60f525 | 2026-03-28 04:24:50.665526 | orchestrator | | properties | hostname='test-2' | 2026-03-28 04:24:50.665544 | orchestrator | | security_groups | name='icmp' | 2026-03-28 04:24:50.665563 | orchestrator | | | name='ssh' | 2026-03-28 04:24:50.665576 | orchestrator | | server_groups | None | 2026-03-28 04:24:50.665587 | orchestrator | | status | ACTIVE | 2026-03-28 04:24:50.665599 | orchestrator | | tags | test | 2026-03-28 04:24:50.665611 | orchestrator | | trusted_image_certificates | None | 2026-03-28 04:24:50.665623 | orchestrator | | updated | 2026-03-28T04:23:45Z | 2026-03-28 04:24:50.665635 | orchestrator | | user_id | 1e476341894744ccafa781bf46821df7 | 2026-03-28 04:24:50.665646 | orchestrator | | volumes_attached | delete_on_termination='True', id='d36845ee-eaae-442f-884f-5305b01d491e' | 2026-03-28 04:24:50.669397 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:50.998117 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-28 04:24:54.066114 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:54.066191 | orchestrator | | Field | Value | 2026-03-28 04:24:54.066199 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:54.066204 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 04:24:54.066209 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 04:24:54.066214 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 04:24:54.066218 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-28 04:24:54.066223 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 04:24:54.066229 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 
04:24:54.066263 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 04:24:54.066269 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 04:24:54.066274 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 04:24:54.066279 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 04:24:54.066283 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 04:24:54.066288 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 04:24:54.066293 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 04:24:54.066298 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 04:24:54.066303 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 04:24:54.066311 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T04:23:22.000000 | 2026-03-28 04:24:54.066321 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 04:24:54.066326 | orchestrator | | accessIPv4 | | 2026-03-28 04:24:54.066331 | orchestrator | | accessIPv6 | | 2026-03-28 04:24:54.066336 | orchestrator | | addresses | test=192.168.112.137, 192.168.200.248 | 2026-03-28 04:24:54.066341 | orchestrator | | config_drive | | 2026-03-28 04:24:54.066345 | orchestrator | | created | 2026-03-28T04:22:57Z | 2026-03-28 04:24:54.066350 | orchestrator | | description | None | 2026-03-28 04:24:54.066355 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 04:24:54.066363 | orchestrator | | hostId | e5ffd99a9b5b9f6dfccd908a784f3eb7a19085327fe5bcbc8ed6fa6e | 2026-03-28 04:24:54.066368 | orchestrator | | host_status | None | 2026-03-28 04:24:54.066380 | orchestrator | | id | 
a43f32f0-fbb0-4415-8045-1aa6ae966623 | 2026-03-28 04:24:54.066385 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 04:24:54.066389 | orchestrator | | key_name | test | 2026-03-28 04:24:54.066394 | orchestrator | | locked | False | 2026-03-28 04:24:54.066399 | orchestrator | | locked_reason | None | 2026-03-28 04:24:54.066404 | orchestrator | | name | test-3 | 2026-03-28 04:24:54.066409 | orchestrator | | pinned_availability_zone | None | 2026-03-28 04:24:54.066413 | orchestrator | | progress | 0 | 2026-03-28 04:24:54.066421 | orchestrator | | project_id | 1ab0db955dc440d5bc261f7c0b60f525 | 2026-03-28 04:24:54.066426 | orchestrator | | properties | hostname='test-3' | 2026-03-28 04:24:54.066437 | orchestrator | | security_groups | name='icmp' | 2026-03-28 04:24:54.066442 | orchestrator | | | name='ssh' | 2026-03-28 04:24:54.066447 | orchestrator | | server_groups | None | 2026-03-28 04:24:54.066452 | orchestrator | | status | ACTIVE | 2026-03-28 04:24:54.066457 | orchestrator | | tags | test | 2026-03-28 04:24:54.066461 | orchestrator | | trusted_image_certificates | None | 2026-03-28 04:24:54.066466 | orchestrator | | updated | 2026-03-28T04:23:46Z | 2026-03-28 04:24:54.066474 | orchestrator | | user_id | 1e476341894744ccafa781bf46821df7 | 2026-03-28 04:24:54.066479 | orchestrator | | volumes_attached | delete_on_termination='True', id='c9db37b4-a2df-443b-b073-29ec40db3a72' | 2026-03-28 04:24:54.072190 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:54.401193 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-28 04:24:57.564105 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:57.564189 | orchestrator | | Field | Value | 2026-03-28 04:24:57.564200 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:57.564208 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 04:24:57.564215 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 04:24:57.564222 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 04:24:57.564248 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-28 04:24:57.564255 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 04:24:57.564261 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 04:24:57.564292 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 04:24:57.564299 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 04:24:57.564305 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 04:24:57.564311 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 04:24:57.564317 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 04:24:57.564323 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 04:24:57.564333 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-28 04:24:57.564339 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 04:24:57.564345 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 04:24:57.564351 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T04:23:21.000000 | 2026-03-28 04:24:57.564362 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 04:24:57.564369 | orchestrator | | accessIPv4 | | 2026-03-28 04:24:57.564375 | orchestrator | | accessIPv6 | | 2026-03-28 04:24:57.564381 | orchestrator | | addresses | test=192.168.112.106, 192.168.200.25 | 2026-03-28 04:24:57.564387 | orchestrator | | config_drive | | 2026-03-28 04:24:57.564398 | orchestrator | | created | 2026-03-28T04:22:58Z | 2026-03-28 04:24:57.564404 | orchestrator | | description | None | 2026-03-28 04:24:57.564457 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 04:24:57.564467 | orchestrator | | hostId | 24dc6707a9688401b7504573cea304caa9a03c886ccc3b680ce4f46e | 2026-03-28 04:24:57.564473 | orchestrator | | host_status | None | 2026-03-28 04:24:57.564490 | orchestrator | | id | 79ec3db1-644a-4753-a13d-53cdf72080eb | 2026-03-28 04:24:57.564501 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 04:24:57.564516 | orchestrator | | key_name | test | 2026-03-28 04:24:57.564527 | orchestrator | | locked | False | 2026-03-28 04:24:57.564537 | orchestrator | | locked_reason | None | 2026-03-28 04:24:57.564554 | orchestrator | | name | test-4 | 2026-03-28 04:24:57.564565 | orchestrator | | pinned_availability_zone | None | 2026-03-28 04:24:57.564574 | orchestrator | | progress | 0 | 2026-03-28 
04:24:57.564584 | orchestrator | | project_id | 1ab0db955dc440d5bc261f7c0b60f525 | 2026-03-28 04:24:57.564593 | orchestrator | | properties | hostname='test-4' | 2026-03-28 04:24:57.564615 | orchestrator | | security_groups | name='icmp' | 2026-03-28 04:24:57.564627 | orchestrator | | | name='ssh' | 2026-03-28 04:24:57.564635 | orchestrator | | server_groups | None | 2026-03-28 04:24:57.564641 | orchestrator | | status | ACTIVE | 2026-03-28 04:24:57.564652 | orchestrator | | tags | test | 2026-03-28 04:24:57.564658 | orchestrator | | trusted_image_certificates | None | 2026-03-28 04:24:57.564664 | orchestrator | | updated | 2026-03-28T04:23:47Z | 2026-03-28 04:24:57.564670 | orchestrator | | user_id | 1e476341894744ccafa781bf46821df7 | 2026-03-28 04:24:57.564676 | orchestrator | | volumes_attached | delete_on_termination='True', id='2834006b-bc9a-4510-aa4d-b830ac69c6b9' | 2026-03-28 04:24:57.569622 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 04:24:57.931479 | orchestrator | + server_ping 2026-03-28 04:24:57.933226 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-28 04:24:57.933287 | orchestrator | ++ tr -d '\r' 2026-03-28 04:25:00.779747 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 04:25:00.779856 | orchestrator | + ping -c3 192.168.112.106 2026-03-28 04:25:00.794276 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 
2026-03-28 04:25:00.794342 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=7.97 ms 2026-03-28 04:25:01.790609 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.55 ms 2026-03-28 04:25:02.792379 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.90 ms 2026-03-28 04:25:02.792487 | orchestrator | 2026-03-28 04:25:02.792510 | orchestrator | --- 192.168.112.106 ping statistics --- 2026-03-28 04:25:02.792531 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 04:25:02.792555 | orchestrator | rtt min/avg/max/mdev = 1.895/4.139/7.970/2.721 ms 2026-03-28 04:25:02.792583 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 04:25:02.792637 | orchestrator | + ping -c3 192.168.112.146 2026-03-28 04:25:02.803561 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 2026-03-28 04:25:02.803657 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=6.01 ms 2026-03-28 04:25:03.801010 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.20 ms 2026-03-28 04:25:04.802850 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.99 ms 2026-03-28 04:25:04.802942 | orchestrator | 2026-03-28 04:25:04.802958 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-03-28 04:25:04.803019 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 04:25:04.803033 | orchestrator | rtt min/avg/max/mdev = 1.985/3.397/6.007/1.847 ms 2026-03-28 04:25:04.803542 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 04:25:04.803570 | orchestrator | + ping -c3 192.168.112.173 2026-03-28 04:25:04.812811 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data. 
2026-03-28 04:25:04.812885 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=6.03 ms 2026-03-28 04:25:05.811361 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.70 ms 2026-03-28 04:25:06.813343 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=1.90 ms 2026-03-28 04:25:06.813544 | orchestrator | 2026-03-28 04:25:06.813564 | orchestrator | --- 192.168.112.173 ping statistics --- 2026-03-28 04:25:06.813574 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 04:25:06.813583 | orchestrator | rtt min/avg/max/mdev = 1.898/3.543/6.031/1.789 ms 2026-03-28 04:25:06.813603 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 04:25:06.813613 | orchestrator | + ping -c3 192.168.112.108 2026-03-28 04:25:06.824468 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2026-03-28 04:25:06.824547 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.33 ms 2026-03-28 04:25:07.822567 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.67 ms 2026-03-28 04:25:08.824310 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.10 ms 2026-03-28 04:25:08.824382 | orchestrator | 2026-03-28 04:25:08.824393 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-03-28 04:25:08.824402 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 04:25:08.824411 | orchestrator | rtt min/avg/max/mdev = 2.100/3.699/6.330/1.874 ms 2026-03-28 04:25:08.824419 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 04:25:08.824427 | orchestrator | + ping -c3 192.168.112.137 2026-03-28 04:25:08.834244 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data. 
2026-03-28 04:25:08.834320 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.14 ms 2026-03-28 04:25:09.832281 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.65 ms 2026-03-28 04:25:10.834658 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=2.55 ms 2026-03-28 04:25:10.836005 | orchestrator | 2026-03-28 04:25:10.836073 | orchestrator | --- 192.168.112.137 ping statistics --- 2026-03-28 04:25:10.836089 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 04:25:10.836102 | orchestrator | rtt min/avg/max/mdev = 2.554/3.779/6.135/1.666 ms 2026-03-28 04:25:10.836131 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-28 04:25:11.118067 | orchestrator | ok: Runtime: 0:11:52.982086 2026-03-28 04:25:11.168624 | 2026-03-28 04:25:11.168760 | TASK [Run tempest] 2026-03-28 04:25:11.704708 | orchestrator | skipping: Conditional result was False 2026-03-28 04:25:11.723062 | 2026-03-28 04:25:11.723304 | TASK [Check prometheus alert status] 2026-03-28 04:25:12.260642 | orchestrator | skipping: Conditional result was False 2026-03-28 04:25:12.273076 | 2026-03-28 04:25:12.273239 | PLAY [Upgrade testbed] 2026-03-28 04:25:12.284312 | 2026-03-28 04:25:12.284436 | TASK [Print next ceph version] 2026-03-28 04:25:12.356451 | orchestrator | ok 2026-03-28 04:25:12.363860 | 2026-03-28 04:25:12.363981 | TASK [Print next openstack version] 2026-03-28 04:25:12.443192 | orchestrator | ok 2026-03-28 04:25:12.458693 | 2026-03-28 04:25:12.458952 | TASK [Print next manager version] 2026-03-28 04:25:12.529452 | orchestrator | ok 2026-03-28 04:25:12.540222 | 2026-03-28 04:25:12.540365 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 04:25:12.599820 | orchestrator | ok 2026-03-28 04:25:12.611940 | 2026-03-28 04:25:12.612192 | TASK [Set cloud fact (local deployment)] 2026-03-28 04:25:12.637204 | orchestrator | skipping: Conditional result was False 2026-03-28 04:25:12.651431 | 2026-03-28 
04:25:12.651574 | TASK [Fetch manager address] 2026-03-28 04:25:12.924882 | orchestrator | ok 2026-03-28 04:25:12.935630 | 2026-03-28 04:25:12.935780 | TASK [Set manager_host address] 2026-03-28 04:25:13.015324 | orchestrator | ok 2026-03-28 04:25:13.025966 | 2026-03-28 04:25:13.026086 | TASK [Run upgrade] 2026-03-28 04:25:13.718554 | orchestrator | + set -e 2026-03-28 04:25:13.718709 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:25:13.718725 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:25:13.718739 | orchestrator | + CEPH_VERSION=reef 2026-03-28 04:25:13.718747 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-28 04:25:13.718755 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-28 04:25:13.718769 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-03-28 04:25:13.729419 | orchestrator | + set -e 2026-03-28 04:25:13.729519 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:25:13.729537 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:25:13.729556 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:25:13.729568 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:25:13.729589 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:25:13.730675 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-03-28 04:25:13.772370 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-03-28 04:25:13.773869 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-28 04:25:13.808205 | orchestrator | 2026-03-28 04:25:13.808286 | orchestrator | # UPGRADE MANAGER 2026-03-28 04:25:13.808298 | orchestrator | 2026-03-28 04:25:13.808305 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-03-28 04:25:13.808312 | orchestrator | + echo 2026-03-28 04:25:13.808319 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-03-28 04:25:13.808328 | orchestrator | + echo 2026-03-28 04:25:13.808335 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:25:13.808342 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:25:13.808349 | orchestrator | + CEPH_VERSION=reef 2026-03-28 04:25:13.808356 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-28 04:25:13.808362 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-28 04:25:13.808369 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-03-28 04:25:13.817014 | orchestrator | + set -e 2026-03-28 04:25:13.817119 | orchestrator | + VERSION=10.0.0-rc.1 2026-03-28 04:25:13.817133 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 04:25:13.824773 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-03-28 04:25:13.824847 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-28 04:25:13.830499 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-28 04:25:13.833708 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-28 04:25:13.843933 | orchestrator | /opt/configuration ~ 2026-03-28 04:25:13.844003 | orchestrator | + set -e 2026-03-28 04:25:13.844014 | orchestrator | + pushd /opt/configuration 2026-03-28 04:25:13.844023 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 04:25:13.844033 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 04:25:13.845131 | orchestrator | ++ deactivate nondestructive 2026-03-28 04:25:13.845163 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:13.845168 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:13.845173 | orchestrator | ++ hash -r 2026-03-28 04:25:13.845210 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:13.845217 | orchestrator | ++ unset VIRTUAL_ENV 
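The `set-manager-version.sh` trace above pins `manager_version` in the manager environment and deletes any explicit `ceph_version`/`openstack_version` lines so the release defaults take over. A sketch of that edit, run against a temporary copy of the configuration (paths and sample values here are illustrative, not the testbed's real file):

```shell
#!/usr/bin/env bash
set -e

VERSION="10.0.0-rc.1"
CFG=$(mktemp)

# Sample configuration.yml content for illustration.
cat > "$CFG" <<'EOF'
manager_version: v0.20251130.0
ceph_version: quincy
openstack_version: 2024.1
EOF

# Pin the manager version, then drop the per-service version pins,
# mirroring the three sed invocations in the trace.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CFG"
sed -i '/ceph_version:/d' "$CFG"
sed -i '/openstack_version:/d' "$CFG"

cat "$CFG"
```

After this runs, only the pinned `manager_version` line remains; the upgrade script then passes the Ceph and OpenStack versions explicitly via environment variables instead.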
2026-03-28 04:25:13.845221 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 04:25:13.845521 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-28 04:25:13.845619 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 04:25:13.845627 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 04:25:13.845631 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 04:25:13.845637 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 04:25:13.845642 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:13.845648 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:13.845659 | orchestrator | ++ export PATH 2026-03-28 04:25:13.845664 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:13.845668 | orchestrator | ++ '[' -z '' ']' 2026-03-28 04:25:13.845673 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 04:25:13.845678 | orchestrator | ++ PS1='(venv) ' 2026-03-28 04:25:13.845683 | orchestrator | ++ export PS1 2026-03-28 04:25:13.845689 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 04:25:13.845694 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 04:25:13.845699 | orchestrator | ++ hash -r 2026-03-28 04:25:13.845707 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-28 04:25:15.314372 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-28 04:25:15.316462 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-28 04:25:15.318885 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-28 04:25:15.321062 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-28 04:25:15.323332 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-03-28 04:25:15.335725 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-28 04:25:15.337342 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-28 04:25:15.338509 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-28 04:25:15.340044 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-28 04:25:15.407055 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-28 04:25:15.407897 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-28 04:25:15.409590 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-28 04:25:15.410894 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-28 04:25:15.415208 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-28 04:25:15.665182 | orchestrator | ++ which gilt 2026-03-28 04:25:15.668442 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-28 04:25:15.668538 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-28 04:25:15.935584 | orchestrator | osism.cfg-generics: 2026-03-28 04:25:16.059867 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-28 04:25:16.060548 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-28 04:25:16.062382 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-28 04:25:16.062419 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-28 04:25:17.175192 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-28 04:25:17.185338 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-28 04:25:17.556697 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-28 04:25:17.640321 | orchestrator | ~ 2026-03-28 04:25:17.640424 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 04:25:17.640436 | orchestrator | + deactivate 2026-03-28 04:25:17.640444 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 04:25:17.640453 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:17.640460 | orchestrator | + export PATH 2026-03-28 04:25:17.640467 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 04:25:17.640474 | orchestrator | + '[' -n '' ']' 2026-03-28 04:25:17.640481 | orchestrator | + hash -r 2026-03-28 04:25:17.640488 | orchestrator | + '[' -n '' ']' 2026-03-28 04:25:17.640495 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 04:25:17.640501 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-28 04:25:17.640508 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-28 04:25:17.640515 | orchestrator | + unset -f deactivate 2026-03-28 04:25:17.640522 | orchestrator | + popd 2026-03-28 04:25:17.642348 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-28 04:25:17.642449 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-28 04:25:17.647566 | orchestrator | + set -e 2026-03-28 04:25:17.647618 | orchestrator | + NAMESPACE=kolla/release 2026-03-28 04:25:17.647632 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-28 04:25:17.655318 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-28 04:25:17.664716 | orchestrator | /opt/configuration ~ 2026-03-28 04:25:17.664782 | orchestrator | + set -e 2026-03-28 04:25:17.664788 | orchestrator | + pushd /opt/configuration 2026-03-28 04:25:17.664793 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 04:25:17.664798 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 04:25:17.665288 | orchestrator | ++ deactivate nondestructive 2026-03-28 04:25:17.665299 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:17.665332 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:17.665339 | orchestrator | ++ hash -r 2026-03-28 04:25:17.665363 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:17.665370 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-28 04:25:17.665377 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 04:25:17.665384 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-28 04:25:17.665417 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 04:25:17.665425 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 04:25:17.665432 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 04:25:17.665507 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 04:25:17.665517 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:17.665528 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:17.665535 | orchestrator | ++ export PATH 2026-03-28 04:25:17.665542 | orchestrator | ++ '[' -n '' ']' 2026-03-28 04:25:17.665551 | orchestrator | ++ '[' -z '' ']' 2026-03-28 04:25:17.665558 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 04:25:17.665566 | orchestrator | ++ PS1='(venv) ' 2026-03-28 04:25:17.665573 | orchestrator | ++ export PS1 2026-03-28 04:25:17.665580 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 04:25:17.665587 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 04:25:17.665593 | orchestrator | ++ hash -r 2026-03-28 04:25:17.665600 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-28 04:25:18.233313 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-28 04:25:18.233487 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-28 04:25:18.234765 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-28 04:25:18.236403 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-28 04:25:18.237503 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-28 04:25:18.247703 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-28 04:25:18.249042 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-28 04:25:18.250184 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-28 04:25:18.251569 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-28 04:25:18.293701 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-28 04:25:18.296303 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-28 04:25:18.298315 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-28 04:25:18.299887 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-28 04:25:18.305144 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-28 04:25:18.578347 | orchestrator | ++ which gilt 2026-03-28 04:25:18.580238 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-28 04:25:18.580303 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-28 04:25:18.755064 | orchestrator | osism.cfg-generics: 2026-03-28 04:25:18.898306 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-28 04:25:18.898382 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-28 04:25:18.898390 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-28 04:25:18.898407 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-28 04:25:19.532496 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-28 04:25:19.544442 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-28 04:25:19.898390 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-28 04:25:19.953212 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 04:25:19.953314 | orchestrator | + deactivate 2026-03-28 04:25:19.953350 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 04:25:19.953362 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 04:25:19.953370 | orchestrator | + export PATH 2026-03-28 04:25:19.953379 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 04:25:19.953388 | orchestrator | + '[' -n '' ']' 2026-03-28 04:25:19.953396 | orchestrator | + hash -r 2026-03-28 04:25:19.953404 | orchestrator | + '[' -n '' ']' 2026-03-28 04:25:19.953412 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 04:25:19.953421 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-28 04:25:19.953429 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-28 04:25:19.953437 | orchestrator | + unset -f deactivate 2026-03-28 04:25:19.953456 | orchestrator | ~ 2026-03-28 04:25:19.953465 | orchestrator | + popd 2026-03-28 04:25:19.955785 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-28 04:25:20.024158 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 04:25:20.024907 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-28 04:25:20.123566 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 04:25:20.123677 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-28 04:25:20.128791 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-28 04:25:20.133050 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-28 04:25:20.178555 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-28 04:25:20.178666 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-28 04:25:20.257470 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-28 04:25:20.257601 | orchestrator | ++ echo true 2026-03-28 04:25:20.259164 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-28 04:25:20.261828 | orchestrator | +++ semver 2024.2 2024.2 2026-03-28 04:25:20.336208 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-28 04:25:20.336982 | orchestrator | +++ semver 2024.2 2025.1 2026-03-28 04:25:20.389617 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-28 04:25:20.389713 | orchestrator | ++ echo false 2026-03-28 04:25:20.391118 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-28 04:25:20.391173 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 04:25:20.391182 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-28 04:25:20.391187 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-28 04:25:20.391195 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 
2026-03-28 04:25:20.397332 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-03-28 04:25:20.397382 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-28 04:25:20.419292 | orchestrator | export RABBITMQ3TO4=true 2026-03-28 04:25:20.424391 | orchestrator | + osism update manager 2026-03-28 04:25:26.549545 | orchestrator | Collecting uv 2026-03-28 04:25:26.638757 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-28 04:25:26.661074 | orchestrator | Downloading uv-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB) 2026-03-28 04:25:27.560415 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 26.9 MB/s eta 0:00:00 2026-03-28 04:25:27.626250 | orchestrator | Installing collected packages: uv 2026-03-28 04:25:28.168353 | orchestrator | Successfully installed uv-0.11.2 2026-03-28 04:25:28.927632 | orchestrator | Resolved 11 packages in 373ms 2026-03-28 04:25:28.951722 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-28 04:25:28.963911 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-28 04:25:28.964083 | orchestrator | Downloading ansible (54.5MiB) 2026-03-28 04:25:28.964162 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-28 04:25:29.330920 | orchestrator | Downloaded netaddr 2026-03-28 04:25:29.429588 | orchestrator | Downloaded cryptography 2026-03-28 04:25:29.535725 | orchestrator | Downloaded ansible-core 2026-03-28 04:25:37.127364 | orchestrator | Downloaded ansible 2026-03-28 04:25:37.128394 | orchestrator | Prepared 11 packages in 8.20s 2026-03-28 04:25:37.764716 | orchestrator | Installed 11 packages in 634ms 2026-03-28 04:25:37.764800 | orchestrator | + ansible==11.11.0 2026-03-28 04:25:37.764811 | orchestrator | + ansible-core==2.18.15 2026-03-28 04:25:37.764820 | orchestrator | + cffi==2.0.0 2026-03-28 04:25:37.764829 | orchestrator | + cryptography==46.0.6 2026-03-28 04:25:37.764838 | orchestrator | + 
jinja2==3.1.6 2026-03-28 04:25:37.764846 | orchestrator | + markupsafe==3.0.3 2026-03-28 04:25:37.764854 | orchestrator | + netaddr==1.3.0 2026-03-28 04:25:37.764862 | orchestrator | + packaging==26.0 2026-03-28 04:25:37.764870 | orchestrator | + pycparser==3.0 2026-03-28 04:25:37.764878 | orchestrator | + pyyaml==6.0.3 2026-03-28 04:25:37.764887 | orchestrator | + resolvelib==1.0.1 2026-03-28 04:25:38.991402 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-205204kmxocwg7/tmphkbumh23/ansible-collection-serviceso93w618m'... 2026-03-28 04:25:40.759413 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-28 04:25:40.759482 | orchestrator | Already on 'main' 2026-03-28 04:25:41.298121 | orchestrator | Starting galaxy collection install process 2026-03-28 04:25:41.298224 | orchestrator | Process install dependency map 2026-03-28 04:25:41.298240 | orchestrator | Starting collection install process 2026-03-28 04:25:41.298253 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-28 04:25:41.298267 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-28 04:25:41.298278 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-28 04:25:41.877834 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-205223j6pbsb8a/tmphrzbhemm/ansible-playbooks-managern55do_4_'... 2026-03-28 04:25:42.507321 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-03-28 04:25:42.508180 | orchestrator | Already on 'main' 2026-03-28 04:25:42.777371 | orchestrator | Starting galaxy collection install process 2026-03-28 04:25:42.777511 | orchestrator | Process install dependency map 2026-03-28 04:25:42.777529 | orchestrator | Starting collection install process 2026-03-28 04:25:42.777539 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-28 04:25:42.777549 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-28 04:25:42.777557 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-28 04:25:43.464538 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-28 04:25:43.464640 | orchestrator | -vvvv to see details 2026-03-28 04:25:43.878453 | orchestrator | 2026-03-28 04:25:43.878556 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-28 04:25:43.878578 | orchestrator | 2026-03-28 04:25:43.878598 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 04:25:48.978451 | orchestrator | ok: [testbed-manager] 2026-03-28 04:25:48.978555 | orchestrator | 2026-03-28 04:25:48.978567 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-28 04:25:49.060878 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 04:25:49.060977 | orchestrator | 2026-03-28 04:25:49.061000 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-28 04:25:50.905295 | orchestrator | ok: [testbed-manager] 2026-03-28 04:25:50.905396 | orchestrator | 2026-03-28 04:25:50.905413 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-03-28 04:25:50.968491 | orchestrator | ok: [testbed-manager] 2026-03-28 04:25:50.968632 | orchestrator | 2026-03-28 04:25:50.968652 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-28 04:25:51.039153 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-28 04:25:51.039246 | orchestrator | 2026-03-28 04:25:51.039260 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-28 04:25:55.480220 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-28 04:25:55.480320 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-28 04:25:55.480330 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-28 04:25:55.480346 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-28 04:25:55.480356 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-28 04:25:55.480367 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-28 04:25:55.480377 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-28 04:25:55.480387 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-28 04:25:55.480398 | orchestrator | 2026-03-28 04:25:55.480409 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-28 04:25:56.661877 | orchestrator | ok: [testbed-manager] 2026-03-28 04:25:56.662126 | orchestrator | 2026-03-28 04:25:56.662148 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-28 04:25:57.651983 | orchestrator | ok: [testbed-manager] 2026-03-28 04:25:57.652119 | orchestrator | 2026-03-28 04:25:57.652152 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-03-28 04:25:57.739438 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-28 04:25:57.739610 | orchestrator | 2026-03-28 04:25:57.739625 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-28 04:25:59.657414 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-28 04:25:59.657549 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-28 04:25:59.657573 | orchestrator | 2026-03-28 04:25:59.657594 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-28 04:26:00.626822 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:00.626943 | orchestrator | 2026-03-28 04:26:00.626957 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-28 04:26:00.683166 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:26:00.683253 | orchestrator | 2026-03-28 04:26:00.683266 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-28 04:26:00.776552 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-28 04:26:00.776667 | orchestrator | 2026-03-28 04:26:00.776692 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-28 04:26:01.805499 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:01.805651 | orchestrator | 2026-03-28 04:26:01.805680 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-28 04:26:01.885548 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-28 04:26:01.885646 | 
orchestrator | 2026-03-28 04:26:01.885658 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-28 04:26:03.844556 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-28 04:26:03.844660 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-28 04:26:03.844675 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:03.844689 | orchestrator | 2026-03-28 04:26:03.844701 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-28 04:26:04.832240 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:04.833052 | orchestrator | 2026-03-28 04:26:04.833086 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-28 04:26:04.899388 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:26:04.899486 | orchestrator | 2026-03-28 04:26:04.899502 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-28 04:26:05.020971 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-28 04:26:05.021071 | orchestrator | 2026-03-28 04:26:05.021087 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-28 04:26:05.770836 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:05.770991 | orchestrator | 2026-03-28 04:26:05.771011 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-28 04:26:06.360990 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:06.361110 | orchestrator | 2026-03-28 04:26:06.361127 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-28 04:26:08.307445 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-28 04:26:08.307572 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-03-28 04:26:08.307599 | orchestrator | 2026-03-28 04:26:08.307620 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-28 04:26:09.480180 | orchestrator | changed: [testbed-manager] 2026-03-28 04:26:09.480265 | orchestrator | 2026-03-28 04:26:09.480274 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-28 04:26:10.080797 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:10.081737 | orchestrator | 2026-03-28 04:26:10.081774 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-28 04:26:10.650007 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:10.650262 | orchestrator | 2026-03-28 04:26:10.650305 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-28 04:26:10.707095 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:26:10.707200 | orchestrator | 2026-03-28 04:26:10.707219 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-28 04:26:10.799617 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-28 04:26:10.799721 | orchestrator | 2026-03-28 04:26:10.799737 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-28 04:26:10.870848 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:10.871011 | orchestrator | 2026-03-28 04:26:10.871030 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-28 04:26:13.856133 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-03-28 04:26:13.856240 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-03-28 04:26:13.856256 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-03-28 04:26:13.856268 | orchestrator | 2026-03-28 04:26:13.856281 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-28 04:26:14.902755 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:14.902831 | orchestrator | 2026-03-28 04:26:14.902843 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-28 04:26:15.953815 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:15.953998 | orchestrator | 2026-03-28 04:26:15.954083 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-28 04:26:16.978613 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:16.978723 | orchestrator | 2026-03-28 04:26:16.978742 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-28 04:26:17.072666 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-28 04:26:17.072772 | orchestrator | 2026-03-28 04:26:17.072790 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-28 04:26:17.117140 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:17.117230 | orchestrator | 2026-03-28 04:26:17.117240 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-28 04:26:18.112376 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-03-28 04:26:18.112449 | orchestrator | 2026-03-28 04:26:18.112457 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-28 04:26:18.211193 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-28 04:26:18.211265 | orchestrator | 2026-03-28 04:26:18.211272 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-28 04:26:19.263959 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:19.264057 | orchestrator | 2026-03-28 04:26:19.264070 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-28 04:26:20.410377 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:20.410445 | orchestrator | 2026-03-28 04:26:20.410452 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-28 04:26:20.491181 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:26:20.491271 | orchestrator | 2026-03-28 04:26:20.491283 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-28 04:26:20.570332 | orchestrator | ok: [testbed-manager] 2026-03-28 04:26:20.570462 | orchestrator | 2026-03-28 04:26:20.570489 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-28 04:26:22.103859 | orchestrator | changed: [testbed-manager] 2026-03-28 04:26:22.103999 | orchestrator | 2026-03-28 04:26:22.104013 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-28 04:27:37.124653 | orchestrator | changed: [testbed-manager] 2026-03-28 04:27:37.124729 | orchestrator | 2026-03-28 04:27:37.124736 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-28 04:27:38.494552 | orchestrator | ok: [testbed-manager] 2026-03-28 04:27:38.494655 | orchestrator | 2026-03-28 04:27:38.494673 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-28 04:27:38.561176 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:27:38.561273 | orchestrator | 2026-03-28 04:27:38.561289 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-28 
04:27:39.444429 | orchestrator | ok: [testbed-manager] 2026-03-28 04:27:39.444514 | orchestrator | 2026-03-28 04:27:39.444522 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-28 04:27:39.522102 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:27:39.522170 | orchestrator | 2026-03-28 04:27:39.522178 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 04:27:39.522183 | orchestrator | 2026-03-28 04:27:39.522188 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-28 04:27:57.687604 | orchestrator | changed: [testbed-manager] 2026-03-28 04:27:57.687743 | orchestrator | 2026-03-28 04:27:57.687836 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-28 04:28:57.775492 | orchestrator | Pausing for 60 seconds 2026-03-28 04:28:57.775653 | orchestrator | changed: [testbed-manager] 2026-03-28 04:28:57.775679 | orchestrator | 2026-03-28 04:28:57.775756 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-03-28 04:28:57.825421 | orchestrator | ok: [testbed-manager] 2026-03-28 04:28:57.825511 | orchestrator | 2026-03-28 04:28:57.825523 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-28 04:29:01.964171 | orchestrator | changed: [testbed-manager] 2026-03-28 04:29:01.964300 | orchestrator | 2026-03-28 04:29:01.964327 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-28 04:30:04.806850 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-28 04:30:04.806940 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-03-28 04:30:04.806951 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-28 04:30:04.806961 | orchestrator | changed: [testbed-manager] 2026-03-28 04:30:04.806971 | orchestrator | 2026-03-28 04:30:04.806979 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-28 04:30:17.813462 | orchestrator | changed: [testbed-manager] 2026-03-28 04:30:17.813558 | orchestrator | 2026-03-28 04:30:17.813569 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-28 04:30:17.898407 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-28 04:30:17.898530 | orchestrator | 2026-03-28 04:30:17.898546 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 04:30:17.898581 | orchestrator | 2026-03-28 04:30:17.898591 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-28 04:30:17.961667 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:30:17.961779 | orchestrator | 2026-03-28 04:30:17.961807 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-28 04:30:18.051832 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-28 04:30:18.051907 | orchestrator | 2026-03-28 04:30:18.051928 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-28 04:30:19.210721 | orchestrator | changed: [testbed-manager] 2026-03-28 04:30:19.210830 | orchestrator | 2026-03-28 04:30:19.210848 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-28 04:30:23.022283 
| orchestrator | ok: [testbed-manager] 2026-03-28 04:30:23.022392 | orchestrator | 2026-03-28 04:30:23.022409 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-28 04:30:23.106368 | orchestrator | ok: [testbed-manager] => { 2026-03-28 04:30:23.106467 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-28 04:30:23.106486 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-28 04:30:23.106499 | orchestrator | "Checking running containers against expected versions...", 2026-03-28 04:30:23.106512 | orchestrator | "", 2026-03-28 04:30:23.106523 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-28 04:30:23.106535 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-28 04:30:23.106547 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106558 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-28 04:30:23.106570 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106581 | orchestrator | "", 2026-03-28 04:30:23.106591 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-28 04:30:23.106650 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-28 04:30:23.106664 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106676 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-28 04:30:23.106686 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106697 | orchestrator | "", 2026-03-28 04:30:23.106708 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-28 04:30:23.106719 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-28 04:30:23.106730 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106740 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-28 04:30:23.106751 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106761 | orchestrator | "", 2026-03-28 04:30:23.106771 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-28 04:30:23.106782 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-28 04:30:23.106793 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106803 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-28 04:30:23.106815 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106822 | orchestrator | "", 2026-03-28 04:30:23.106828 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-28 04:30:23.106835 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-28 04:30:23.106842 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106848 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-28 04:30:23.106854 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106861 | orchestrator | "", 2026-03-28 04:30:23.106867 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-28 04:30:23.106896 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.106904 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106911 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.106919 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106926 | orchestrator | "", 2026-03-28 04:30:23.106933 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-28 04:30:23.106941 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 04:30:23.106948 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.106955 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 
04:30:23.106962 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.106970 | orchestrator | "", 2026-03-28 04:30:23.106977 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-28 04:30:23.106984 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 04:30:23.106992 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107007 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 04:30:23.107014 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107021 | orchestrator | "", 2026-03-28 04:30:23.107029 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-28 04:30:23.107036 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-28 04:30:23.107044 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107051 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-28 04:30:23.107058 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107065 | orchestrator | "", 2026-03-28 04:30:23.107076 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-28 04:30:23.107083 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 04:30:23.107091 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107098 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 04:30:23.107105 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107112 | orchestrator | "", 2026-03-28 04:30:23.107119 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-28 04:30:23.107126 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107133 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107140 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107147 | orchestrator | " Status: ✅ MATCH", 2026-03-28 
04:30:23.107155 | orchestrator | "", 2026-03-28 04:30:23.107162 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-28 04:30:23.107169 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107176 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107183 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107190 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107198 | orchestrator | "", 2026-03-28 04:30:23.107205 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-28 04:30:23.107212 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107219 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107226 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107232 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107238 | orchestrator | "", 2026-03-28 04:30:23.107244 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-28 04:30:23.107251 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107257 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107263 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107285 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107291 | orchestrator | "", 2026-03-28 04:30:23.107298 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-28 04:30:23.107304 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107316 | orchestrator | " Enabled: true", 2026-03-28 04:30:23.107322 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-28 04:30:23.107328 | orchestrator | " Status: ✅ MATCH", 2026-03-28 04:30:23.107334 | orchestrator | "", 2026-03-28 04:30:23.107341 | orchestrator | "=== Summary 
===", 2026-03-28 04:30:23.107347 | orchestrator | "Errors (version mismatches): 0", 2026-03-28 04:30:23.107353 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-28 04:30:23.107359 | orchestrator | "", 2026-03-28 04:30:23.107366 | orchestrator | "✅ All running containers match expected versions!" 2026-03-28 04:30:23.107372 | orchestrator | ] 2026-03-28 04:30:23.107379 | orchestrator | } 2026-03-28 04:30:23.107385 | orchestrator | 2026-03-28 04:30:23.107391 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-28 04:30:23.171040 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:30:23.171118 | orchestrator | 2026-03-28 04:30:23.171127 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:30:23.171136 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-03-28 04:30:23.171143 | orchestrator | 2026-03-28 04:30:35.862831 | orchestrator | 2026-03-28 04:30:35 | INFO  | Task 31839cdb-a4ea-4675-a559-8f6d3febb077 (sync inventory) is running in background. Output coming soon. 
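The version check above compares, for each manager service, the image tag expected by the configuration against the image the running container was actually started from. A minimal sketch of that idea, assuming a hypothetical helper name; the real `verify-versions` script in `osism.services.manager` may differ in detail:

```shell
#!/usr/bin/env bash
# Sketch: compare an expected image reference against the image a running
# container was created from. Hypothetical helper, not the actual OSISM
# version check script.
check_service_version() {
    local container="$1" expected="$2"
    local running
    # .Config.Image is the image reference the container was created with
    running=$(docker inspect -f '{{.Config.Image}}' "$container" 2>/dev/null) || {
        echo "WARN: $container not running"
        return 1
    }
    if [ "$running" = "$expected" ]; then
        echo "MATCH: $container ($running)"
    else
        echo "MISMATCH: $container expected=$expected running=$running"
        return 2
    fi
}

# Example usage (image tag taken from the log above):
# check_service_version osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0
```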
2026-03-28 04:31:05.067167 | orchestrator | 2026-03-28 04:30:37 | INFO  | Starting group_vars file reorganization 2026-03-28 04:31:05.067256 | orchestrator | 2026-03-28 04:30:37 | INFO  | Moved 0 file(s) to their respective directories 2026-03-28 04:31:05.067270 | orchestrator | 2026-03-28 04:30:37 | INFO  | Group_vars file reorganization completed 2026-03-28 04:31:05.067279 | orchestrator | 2026-03-28 04:30:40 | INFO  | Starting variable preparation from inventory 2026-03-28 04:31:05.067287 | orchestrator | 2026-03-28 04:30:43 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-28 04:31:05.067295 | orchestrator | 2026-03-28 04:30:43 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-28 04:31:05.067303 | orchestrator | 2026-03-28 04:30:43 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-28 04:31:05.067310 | orchestrator | 2026-03-28 04:30:43 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-28 04:31:05.067318 | orchestrator | 2026-03-28 04:30:43 | INFO  | Variable preparation completed 2026-03-28 04:31:05.067325 | orchestrator | 2026-03-28 04:30:45 | INFO  | Starting inventory overwrite handling 2026-03-28 04:31:05.067332 | orchestrator | 2026-03-28 04:30:45 | INFO  | Handling group overwrites in 99-overwrite 2026-03-28 04:31:05.067340 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removing group frr:children from 60-generic 2026-03-28 04:31:05.067347 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-28 04:31:05.067355 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-28 04:31:05.067362 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-28 04:31:05.067369 | orchestrator | 2026-03-28 04:30:45 | INFO  | Handling group overwrites in 20-roles 2026-03-28 04:31:05.067377 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-28 04:31:05.067384 | orchestrator | 2026-03-28 04:30:45 | INFO  | Removed 5 group(s) in total 2026-03-28 04:31:05.067392 | orchestrator | 2026-03-28 04:30:45 | INFO  | Inventory overwrite handling completed 2026-03-28 04:31:05.067399 | orchestrator | 2026-03-28 04:30:46 | INFO  | Starting merge of inventory files 2026-03-28 04:31:05.067406 | orchestrator | 2026-03-28 04:30:46 | INFO  | Inventory files merged successfully 2026-03-28 04:31:05.067436 | orchestrator | 2026-03-28 04:30:51 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-28 04:31:05.067445 | orchestrator | 2026-03-28 04:31:03 | INFO  | Successfully wrote ClusterShell configuration 2026-03-28 04:31:05.412200 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 04:31:05.412275 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 04:31:05.412285 | orchestrator | + local max_attempts=60 2026-03-28 04:31:05.412295 | orchestrator | + local name=kolla-ansible 2026-03-28 04:31:05.412304 | orchestrator | + local attempt_num=1 2026-03-28 04:31:05.412788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 04:31:05.446064 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 04:31:05.446164 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-28 04:31:05.446181 | orchestrator | + local max_attempts=60 2026-03-28 04:31:05.446193 | orchestrator | + local name=osism-ansible 2026-03-28 04:31:05.446205 | orchestrator | + local attempt_num=1 2026-03-28 04:31:05.446350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 04:31:05.475637 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 04:31:05.475730 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-28 04:31:05.670869 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-28 04:31:05.670985 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-28 04:31:05.671014 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-28 04:31:05.671062 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-28 04:31:05.671093 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-03-28 04:31:05.671112 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-03-28 04:31:05.671131 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-03-28 04:31:05.671150 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-03-28 04:31:05.671169 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 12 seconds ago 2026-03-28 04:31:05.671188 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-03-28 04:31:05.671206 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-03-28 04:31:05.671224 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-03-28 04:31:05.671244 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-28 04:31:05.671292 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-03-28 04:31:05.671313 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-03-28 04:31:05.671333 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-03-28 04:31:05.677973 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-03-28 04:31:05.678064 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-03-28 04:31:05.678078 | orchestrator | + osism apply facts 2026-03-28 04:31:17.749476 | orchestrator | 2026-03-28 04:31:17 | INFO  | Task 60a47d2d-1df2-44f9-b054-cffa8939d398 (facts) was prepared for execution. 2026-03-28 04:31:17.749617 | orchestrator | 2026-03-28 04:31:17 | INFO  | It takes a moment until task 60a47d2d-1df2-44f9-b054-cffa8939d398 (facts) has been started and output is visible here. 
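The `+`/`++` shell trace above comes from a `wait_for_container_healthy` helper that polls Docker's healthcheck status. A reconstruction consistent with the trace; only the variable names and the `docker inspect` call appear in the log, so the loop structure, polling interval, and failure handling here are assumptions:

```shell
# Reconstruction of the wait_for_container_healthy helper seen in the
# trace. Polling interval and failure path are assumed, not from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container healthcheck until Docker reports "healthy"
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # assumed interval between attempts
    done
}

# Example usage, as in the trace:
# wait_for_container_healthy 60 kolla-ansible
```

In the run above both `kolla-ansible` and `osism-ansible` report `healthy` on the first inspect, so the loop body is never entered.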
2026-03-28 04:31:41.590126 | orchestrator | 2026-03-28 04:31:41.590249 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 04:31:41.590266 | orchestrator | 2026-03-28 04:31:41.590278 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 04:31:41.590289 | orchestrator | Saturday 28 March 2026 04:31:24 +0000 (0:00:02.109) 0:00:02.109 ******** 2026-03-28 04:31:41.590301 | orchestrator | ok: [testbed-manager] 2026-03-28 04:31:41.590314 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:31:41.590325 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:31:41.590336 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:31:41.590347 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:31:41.590358 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:31:41.590369 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:31:41.590380 | orchestrator | 2026-03-28 04:31:41.590392 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 04:31:41.590403 | orchestrator | Saturday 28 March 2026 04:31:27 +0000 (0:00:03.583) 0:00:05.693 ******** 2026-03-28 04:31:41.590414 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:31:41.590427 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:31:41.590438 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:31:41.590449 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:31:41.590460 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:31:41.590471 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:31:41.590482 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:31:41.590493 | orchestrator | 2026-03-28 04:31:41.590596 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 04:31:41.590612 | orchestrator | 2026-03-28 04:31:41.590625 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 04:31:41.590637 | orchestrator | Saturday 28 March 2026 04:31:30 +0000 (0:00:02.613) 0:00:08.306 ******** 2026-03-28 04:31:41.590650 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:31:41.590663 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:31:41.590676 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:31:41.590689 | orchestrator | ok: [testbed-manager] 2026-03-28 04:31:41.590703 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:31:41.590715 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:31:41.590727 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:31:41.590738 | orchestrator | 2026-03-28 04:31:41.590749 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 04:31:41.590760 | orchestrator | 2026-03-28 04:31:41.590771 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 04:31:41.590783 | orchestrator | Saturday 28 March 2026 04:31:38 +0000 (0:00:07.936) 0:00:16.242 ******** 2026-03-28 04:31:41.590794 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:31:41.590829 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:31:41.590841 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:31:41.590852 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:31:41.590863 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:31:41.590874 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:31:41.590885 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:31:41.590896 | orchestrator | 2026-03-28 04:31:41.590907 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:31:41.590918 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590930 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 04:31:41.590941 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590952 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590963 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590974 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590985 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:31:41.590996 | orchestrator | 2026-03-28 04:31:41.591007 | orchestrator | 2026-03-28 04:31:41.591018 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:31:41.591029 | orchestrator | Saturday 28 March 2026 04:31:41 +0000 (0:00:02.702) 0:00:18.945 ******** 2026-03-28 04:31:41.591040 | orchestrator | =============================================================================== 2026-03-28 04:31:41.591051 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.94s 2026-03-28 04:31:41.591062 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.58s 2026-03-28 04:31:41.591073 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.70s 2026-03-28 04:31:41.591084 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.61s 2026-03-28 04:31:41.910355 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-28 04:31:42.010608 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 04:31:42.011130 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-28 04:31:42.036727 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-03-28 04:31:42.036830 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-03-28 04:31:42.043741 | orchestrator | + set -e 2026-03-28 04:31:42.043811 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-03-28 04:31:42.043825 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-28 04:31:42.052034 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-03-28 04:31:42.062856 | orchestrator | 2026-03-28 04:31:42.062903 | orchestrator | # UPGRADE SERVICES 2026-03-28 04:31:42.062924 | orchestrator | 2026-03-28 04:31:42.062945 | orchestrator | + set -e 2026-03-28 04:31:42.062964 | orchestrator | + echo 2026-03-28 04:31:42.062979 | orchestrator | + echo '# UPGRADE SERVICES' 2026-03-28 04:31:42.062990 | orchestrator | + echo 2026-03-28 04:31:42.063001 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 04:31:42.064028 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 04:31:42.064062 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 04:31:42.064073 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 04:31:42.064084 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 04:31:42.064096 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 04:31:42.064109 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 04:31:42.064120 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 04:31:42.064162 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 04:31:42.064181 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 04:31:42.064199 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 04:31:42.064217 | orchestrator | ++ export ARA=false 2026-03-28 04:31:42.064236 | orchestrator | ++ ARA=false 2026-03-28 04:31:42.064254 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 04:31:42.064269 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 04:31:42.064280 | orchestrator | ++ export TEMPEST=false 
2026-03-28 04:31:42.064291 | orchestrator | ++ TEMPEST=false 2026-03-28 04:31:42.064309 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 04:31:42.064329 | orchestrator | ++ IS_ZUUL=true 2026-03-28 04:31:42.064348 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:31:42.064364 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:31:42.064376 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 04:31:42.064387 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 04:31:42.064397 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 04:31:42.064408 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 04:31:42.064419 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 04:31:42.064430 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 04:31:42.064441 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 04:31:42.064452 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 04:31:42.064463 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-28 04:31:42.064474 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-28 04:31:42.064484 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-28 04:31:42.064496 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-28 04:31:42.064565 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-28 04:31:42.071355 | orchestrator | + set -e 2026-03-28 04:31:42.071412 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:31:42.072030 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:31:42.072088 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:31:42.072103 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:31:42.072117 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:31:42.072294 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 04:31:42.072309 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 04:31:42.072320 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 04:31:42.072331 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-03-28 04:31:42.072342 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 04:31:42.072361 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 04:31:42.072373 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 04:31:42.072384 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 04:31:42.072396 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 04:31:42.072424 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 04:31:42.072436 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 04:31:42.072447 | orchestrator | ++ export ARA=false 2026-03-28 04:31:42.072458 | orchestrator | ++ ARA=false 2026-03-28 04:31:42.072469 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 04:31:42.072480 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 04:31:42.072491 | orchestrator | ++ export TEMPEST=false 2026-03-28 04:31:42.072502 | orchestrator | ++ TEMPEST=false 2026-03-28 04:31:42.072539 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 04:31:42.072552 | orchestrator | ++ IS_ZUUL=true 2026-03-28 04:31:42.072563 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:31:42.072574 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 04:31:42.072585 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 04:31:42.072596 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 04:31:42.072692 | orchestrator | 2026-03-28 04:31:42.072708 | orchestrator | # PULL IMAGES 2026-03-28 04:31:42.072721 | orchestrator | 2026-03-28 04:31:42.072732 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 04:31:42.072743 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 04:31:42.072755 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 04:31:42.072765 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 04:31:42.072777 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 04:31:42.072788 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 04:31:42.072799 | orchestrator 
| ++ export RABBITMQ3TO4=true 2026-03-28 04:31:42.072810 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-28 04:31:42.072821 | orchestrator | + echo 2026-03-28 04:31:42.072832 | orchestrator | + echo '# PULL IMAGES' 2026-03-28 04:31:42.072843 | orchestrator | + echo 2026-03-28 04:31:42.073223 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-28 04:31:42.134235 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-28 04:31:42.134319 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-28 04:31:44.136042 | orchestrator | 2026-03-28 04:31:44 | INFO  | Trying to run play pull-images in environment custom 2026-03-28 04:31:54.291102 | orchestrator | 2026-03-28 04:31:54 | INFO  | Task 5649f249-bf01-4103-8ce4-4b9a432dc095 (pull-images) was prepared for execution. 2026-03-28 04:31:54.291218 | orchestrator | 2026-03-28 04:31:54 | INFO  | Task 5649f249-bf01-4103-8ce4-4b9a432dc095 is running in background. No more output. Check ARA for logs. 2026-03-28 04:31:54.657836 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-03-28 04:31:54.668212 | orchestrator | + set -e 2026-03-28 04:31:54.668828 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:31:54.668872 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:31:54.668884 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:31:54.668893 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:31:54.668901 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:31:54.668909 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 04:31:54.670527 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 04:31:54.682690 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:31:54.682734 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-28 04:31:54.682817 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-03-28 04:31:54.730835 | orchestrator | 
+ [[ 1 -ge 0 ]] 2026-03-28 04:31:54.730917 | orchestrator | + osism apply frr 2026-03-28 04:32:07.041457 | orchestrator | 2026-03-28 04:32:07 | INFO  | Task 795aec80-8137-4f88-b721-45a4a80c7a9f (frr) was prepared for execution. 2026-03-28 04:32:07.041641 | orchestrator | 2026-03-28 04:32:07 | INFO  | It takes a moment until task 795aec80-8137-4f88-b721-45a4a80c7a9f (frr) has been started and output is visible here. 2026-03-28 04:32:39.042426 | orchestrator | 2026-03-28 04:32:39.042641 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-28 04:32:39.042721 | orchestrator | 2026-03-28 04:32:39.042744 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-28 04:32:39.042764 | orchestrator | Saturday 28 March 2026 04:32:15 +0000 (0:00:03.767) 0:00:03.767 ******** 2026-03-28 04:32:39.042784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 04:32:39.042805 | orchestrator | 2026-03-28 04:32:39.042823 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-28 04:32:39.042842 | orchestrator | Saturday 28 March 2026 04:32:17 +0000 (0:00:01.802) 0:00:05.570 ******** 2026-03-28 04:32:39.042862 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.042883 | orchestrator | 2026-03-28 04:32:39.042903 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-28 04:32:39.042922 | orchestrator | Saturday 28 March 2026 04:32:19 +0000 (0:00:02.101) 0:00:07.672 ******** 2026-03-28 04:32:39.042942 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.042960 | orchestrator | 2026-03-28 04:32:39.042977 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-28 04:32:39.042995 | orchestrator | Saturday 28 March 2026 
04:32:22 +0000 (0:00:02.952) 0:00:10.625 ******** 2026-03-28 04:32:39.043015 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.043035 | orchestrator | 2026-03-28 04:32:39.043055 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-28 04:32:39.043075 | orchestrator | Saturday 28 March 2026 04:32:24 +0000 (0:00:01.943) 0:00:12.568 ******** 2026-03-28 04:32:39.043096 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.043115 | orchestrator | 2026-03-28 04:32:39.043135 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-28 04:32:39.043148 | orchestrator | Saturday 28 March 2026 04:32:26 +0000 (0:00:01.972) 0:00:14.540 ******** 2026-03-28 04:32:39.043159 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.043170 | orchestrator | 2026-03-28 04:32:39.043182 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-28 04:32:39.043194 | orchestrator | Saturday 28 March 2026 04:32:28 +0000 (0:00:02.391) 0:00:16.932 ******** 2026-03-28 04:32:39.043205 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:32:39.043247 | orchestrator | 2026-03-28 04:32:39.043258 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-28 04:32:39.043270 | orchestrator | Saturday 28 March 2026 04:32:29 +0000 (0:00:01.151) 0:00:18.083 ******** 2026-03-28 04:32:39.043281 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:32:39.043292 | orchestrator | 2026-03-28 04:32:39.043303 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-28 04:32:39.043314 | orchestrator | Saturday 28 March 2026 04:32:30 +0000 (0:00:01.164) 0:00:19.248 ******** 2026-03-28 04:32:39.043325 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.043336 | orchestrator | 2026-03-28 04:32:39.043348 | orchestrator | TASK 
[osism.services.frr : Set sysctl parameters] ****************************** 2026-03-28 04:32:39.043359 | orchestrator | Saturday 28 March 2026 04:32:32 +0000 (0:00:01.894) 0:00:21.142 ******** 2026-03-28 04:32:39.043370 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-28 04:32:39.043381 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-28 04:32:39.043394 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-28 04:32:39.043528 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-28 04:32:39.043546 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-28 04:32:39.043558 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-28 04:32:39.043569 | orchestrator | 2026-03-28 04:32:39.043580 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-28 04:32:39.043591 | orchestrator | Saturday 28 March 2026 04:32:36 +0000 (0:00:03.529) 0:00:24.672 ******** 2026-03-28 04:32:39.043603 | orchestrator | ok: [testbed-manager] 2026-03-28 04:32:39.043614 | orchestrator | 2026-03-28 04:32:39.043625 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:32:39.043636 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 04:32:39.043647 | orchestrator | 2026-03-28 04:32:39.043658 | orchestrator | 2026-03-28 04:32:39.043669 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:32:39.043680 | orchestrator | Saturday 28 March 2026 04:32:38 +0000 (0:00:02.417) 0:00:27.090 ******** 2026-03-28 
04:32:39.043690 | orchestrator | =============================================================================== 2026-03-28 04:32:39.043701 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.53s 2026-03-28 04:32:39.043712 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.95s 2026-03-28 04:32:39.043723 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.42s 2026-03-28 04:32:39.043733 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.39s 2026-03-28 04:32:39.043744 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.10s 2026-03-28 04:32:39.043755 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.97s 2026-03-28 04:32:39.043766 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.94s 2026-03-28 04:32:39.043777 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.89s 2026-03-28 04:32:39.043811 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.80s 2026-03-28 04:32:39.043822 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.16s 2026-03-28 04:32:39.043833 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.15s 2026-03-28 04:32:39.356927 | orchestrator | + osism apply kubernetes 2026-03-28 04:32:41.470924 | orchestrator | 2026-03-28 04:32:41 | INFO  | Task 00125397-e1b9-4f30-874e-c2fa6deeaeff (kubernetes) was prepared for execution. 2026-03-28 04:32:41.471204 | orchestrator | 2026-03-28 04:32:41 | INFO  | It takes a moment until task 00125397-e1b9-4f30-874e-c2fa6deeaeff (kubernetes) has been started and output is visible here. 
2026-03-28 04:33:25.003924 | orchestrator | 2026-03-28 04:33:25.004020 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-28 04:33:25.004031 | orchestrator | 2026-03-28 04:33:25.004039 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-28 04:33:25.004047 | orchestrator | Saturday 28 March 2026 04:32:47 +0000 (0:00:02.001) 0:00:02.001 ******** 2026-03-28 04:33:25.004055 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:33:25.004063 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004070 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004077 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004084 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004092 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004099 | orchestrator | 2026-03-28 04:33:25.004106 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-28 04:33:25.004113 | orchestrator | Saturday 28 March 2026 04:32:52 +0000 (0:00:04.436) 0:00:06.438 ******** 2026-03-28 04:33:25.004120 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004128 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004135 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004141 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004148 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004155 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004162 | orchestrator | 2026-03-28 04:33:25.004169 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-28 04:33:25.004176 | orchestrator | Saturday 28 March 2026 04:32:54 +0000 (0:00:01.853) 0:00:08.292 ******** 2026-03-28 04:33:25.004183 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004190 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
04:33:25.004197 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004204 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004211 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004218 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004225 | orchestrator | 2026-03-28 04:33:25.004232 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-28 04:33:25.004239 | orchestrator | Saturday 28 March 2026 04:32:56 +0000 (0:00:01.965) 0:00:10.257 ******** 2026-03-28 04:33:25.004246 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004253 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:33:25.004261 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004268 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004275 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004282 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004289 | orchestrator | 2026-03-28 04:33:25.004296 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-28 04:33:25.004303 | orchestrator | Saturday 28 March 2026 04:32:59 +0000 (0:00:03.317) 0:00:13.575 ******** 2026-03-28 04:33:25.004310 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:33:25.004317 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004324 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004331 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004338 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004345 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004351 | orchestrator | 2026-03-28 04:33:25.004358 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-28 04:33:25.004366 | orchestrator | Saturday 28 March 2026 04:33:02 +0000 (0:00:02.622) 0:00:16.197 ******** 2026-03-28 04:33:25.004372 | orchestrator | ok: [testbed-node-3] 2026-03-28 
04:33:25.004379 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004386 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004424 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004432 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004456 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004463 | orchestrator | 2026-03-28 04:33:25.004470 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-28 04:33:25.004477 | orchestrator | Saturday 28 March 2026 04:33:04 +0000 (0:00:02.153) 0:00:18.351 ******** 2026-03-28 04:33:25.004484 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004491 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004499 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004505 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004512 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004519 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004526 | orchestrator | 2026-03-28 04:33:25.004533 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-28 04:33:25.004540 | orchestrator | Saturday 28 March 2026 04:33:06 +0000 (0:00:01.818) 0:00:20.170 ******** 2026-03-28 04:33:25.004547 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004554 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004561 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004567 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004574 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004581 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004588 | orchestrator | 2026-03-28 04:33:25.004595 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-28 04:33:25.004608 | orchestrator | Saturday 28 March 2026 04:33:07 +0000 
(0:00:01.652) 0:00:21.823 ******** 2026-03-28 04:33:25.004614 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004620 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004626 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004633 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004640 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004647 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004654 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004661 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004668 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004675 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004682 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004689 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004709 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004716 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004723 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004731 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 04:33:25.004737 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 04:33:25.004744 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004751 | orchestrator | 2026-03-28 04:33:25.004758 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-03-28 04:33:25.004765 | orchestrator | Saturday 28 March 2026 04:33:09 +0000 (0:00:01.786) 0:00:23.609 ******** 2026-03-28 04:33:25.004772 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004779 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004786 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004793 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004800 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.004806 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.004813 | orchestrator | 2026-03-28 04:33:25.004825 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-28 04:33:25.004834 | orchestrator | Saturday 28 March 2026 04:33:11 +0000 (0:00:02.114) 0:00:25.724 ******** 2026-03-28 04:33:25.004841 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:33:25.004848 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004855 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004862 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004868 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004875 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004882 | orchestrator | 2026-03-28 04:33:25.004889 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-28 04:33:25.004896 | orchestrator | Saturday 28 March 2026 04:33:13 +0000 (0:00:01.972) 0:00:27.697 ******** 2026-03-28 04:33:25.004903 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:33:25.004910 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:33:25.004917 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:33:25.004923 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:33:25.004930 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:33:25.004937 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:33:25.004944 | 
orchestrator | 2026-03-28 04:33:25.004951 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-28 04:33:25.004958 | orchestrator | Saturday 28 March 2026 04:33:16 +0000 (0:00:02.806) 0:00:30.504 ******** 2026-03-28 04:33:25.004965 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.004972 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.004979 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.004986 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.004993 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.005000 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.005006 | orchestrator | 2026-03-28 04:33:25.005013 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-28 04:33:25.005020 | orchestrator | Saturday 28 March 2026 04:33:18 +0000 (0:00:01.991) 0:00:32.495 ******** 2026-03-28 04:33:25.005027 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.005034 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.005041 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.005048 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.005055 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.005062 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.005068 | orchestrator | 2026-03-28 04:33:25.005075 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-28 04:33:25.005083 | orchestrator | Saturday 28 March 2026 04:33:20 +0000 (0:00:02.262) 0:00:34.758 ******** 2026-03-28 04:33:25.005090 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.005097 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.005104 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.005111 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 04:33:25.005118 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.005125 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:33:25.005131 | orchestrator | 2026-03-28 04:33:25.005141 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-28 04:33:25.005148 | orchestrator | Saturday 28 March 2026 04:33:22 +0000 (0:00:01.775) 0:00:36.533 ******** 2026-03-28 04:33:25.005155 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-28 04:33:25.005163 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-28 04:33:25.005170 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.005176 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-28 04:33:25.005183 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-28 04:33:25.005190 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.005197 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-28 04:33:25.005204 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-28 04:33:25.005218 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:33:25.005225 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-28 04:33:25.005232 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-28 04:33:25.005239 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:33:25.005245 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-28 04:33:25.005252 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-28 04:33:25.005259 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:33:25.005266 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-28 04:33:25.005301 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-28 04:33:25.005308 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
04:33:25.005315 | orchestrator | 2026-03-28 04:33:25.005322 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-28 04:33:25.005330 | orchestrator | Saturday 28 March 2026 04:33:24 +0000 (0:00:02.051) 0:00:38.584 ******** 2026-03-28 04:33:25.005337 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:33:25.005357 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:33:25.005368 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:35:02.016794 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:35:02.016899 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.016910 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.016917 | orchestrator | 2026-03-28 04:35:02.016927 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-28 04:35:02.016936 | orchestrator | Saturday 28 March 2026 04:33:26 +0000 (0:00:01.743) 0:00:40.328 ******** 2026-03-28 04:35:02.016944 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:35:02.016952 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:35:02.016959 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:35:02.016966 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:35:02.016973 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.016980 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.016987 | orchestrator | 2026-03-28 04:35:02.016994 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-28 04:35:02.017002 | orchestrator | 2026-03-28 04:35:02.017009 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-28 04:35:02.017017 | orchestrator | Saturday 28 March 2026 04:33:28 +0000 (0:00:02.681) 0:00:43.009 ******** 2026-03-28 04:35:02.017025 | orchestrator | ok: [testbed-node-0] 2026-03-28 
04:35:02.017033 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017040 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017047 | orchestrator | 2026-03-28 04:35:02.017055 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-28 04:35:02.017078 | orchestrator | Saturday 28 March 2026 04:33:30 +0000 (0:00:01.922) 0:00:44.932 ******** 2026-03-28 04:35:02.017086 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017093 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017100 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017107 | orchestrator | 2026-03-28 04:35:02.017114 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-28 04:35:02.017121 | orchestrator | Saturday 28 March 2026 04:33:33 +0000 (0:00:02.230) 0:00:47.163 ******** 2026-03-28 04:35:02.017129 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:35:02.017136 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:35:02.017143 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:35:02.017150 | orchestrator | 2026-03-28 04:35:02.017161 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-28 04:35:02.017168 | orchestrator | Saturday 28 March 2026 04:33:35 +0000 (0:00:02.283) 0:00:49.447 ******** 2026-03-28 04:35:02.017176 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017183 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017190 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017197 | orchestrator | 2026-03-28 04:35:02.017223 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-28 04:35:02.017231 | orchestrator | Saturday 28 March 2026 04:33:37 +0000 (0:00:01.968) 0:00:51.415 ******** 2026-03-28 04:35:02.017238 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:35:02.017246 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 04:35:02.017254 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.017261 | orchestrator | 2026-03-28 04:35:02.017269 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-28 04:35:02.017277 | orchestrator | Saturday 28 March 2026 04:33:38 +0000 (0:00:01.339) 0:00:52.755 ******** 2026-03-28 04:35:02.017284 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017322 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017330 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017338 | orchestrator | 2026-03-28 04:35:02.017346 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-28 04:35:02.017354 | orchestrator | Saturday 28 March 2026 04:33:40 +0000 (0:00:01.746) 0:00:54.502 ******** 2026-03-28 04:35:02.017362 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017370 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017378 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017387 | orchestrator | 2026-03-28 04:35:02.017395 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-28 04:35:02.017404 | orchestrator | Saturday 28 March 2026 04:33:42 +0000 (0:00:02.176) 0:00:56.679 ******** 2026-03-28 04:35:02.017412 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:35:02.017421 | orchestrator | 2026-03-28 04:35:02.017429 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-28 04:35:02.017438 | orchestrator | Saturday 28 March 2026 04:33:44 +0000 (0:00:01.991) 0:00:58.670 ******** 2026-03-28 04:35:02.017446 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017455 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:35:02.017463 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:35:02.017472 | 
orchestrator | 2026-03-28 04:35:02.017480 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-28 04:35:02.017489 | orchestrator | Saturday 28 March 2026 04:33:47 +0000 (0:00:02.489) 0:01:01.159 ******** 2026-03-28 04:35:02.017497 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.017506 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:35:02.017514 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.017522 | orchestrator | 2026-03-28 04:35:02.017529 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-28 04:35:02.017537 | orchestrator | Saturday 28 March 2026 04:33:48 +0000 (0:00:01.634) 0:01:02.794 ******** 2026-03-28 04:35:02.017546 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.017553 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.017562 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:35:02.017570 | orchestrator | 2026-03-28 04:35:02.017578 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-28 04:35:02.017586 | orchestrator | Saturday 28 March 2026 04:33:50 +0000 (0:00:01.893) 0:01:04.687 ******** 2026-03-28 04:35:02.017594 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.017603 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.017611 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:35:02.017619 | orchestrator | 2026-03-28 04:35:02.017627 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-28 04:35:02.017634 | orchestrator | Saturday 28 March 2026 04:33:53 +0000 (0:00:02.439) 0:01:07.127 ******** 2026-03-28 04:35:02.017642 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:35:02.017650 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:35:02.017675 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:35:02.017686 | 
orchestrator |
2026-03-28 04:35:02.017695 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-28 04:35:02.017704 | orchestrator | Saturday 28 March 2026 04:33:54 +0000 (0:00:01.382) 0:01:08.509 ********
2026-03-28 04:35:02.017721 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:35:02.017730 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:35:02.017739 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:35:02.017747 | orchestrator |
2026-03-28 04:35:02.017755 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-28 04:35:02.017762 | orchestrator | Saturday 28 March 2026 04:33:56 +0000 (0:00:01.571) 0:01:10.081 ********
2026-03-28 04:35:02.017769 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:02.017777 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:02.017785 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:02.017792 | orchestrator |
2026-03-28 04:35:02.017799 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-28 04:35:02.017807 | orchestrator | Saturday 28 March 2026 04:33:58 +0000 (0:00:02.117) 0:01:12.198 ********
2026-03-28 04:35:02.017815 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.017824 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.017832 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.017839 | orchestrator |
2026-03-28 04:35:02.017846 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-28 04:35:02.017853 | orchestrator | Saturday 28 March 2026 04:33:59 +0000 (0:00:01.444) 0:01:14.037 ********
2026-03-28 04:35:02.017860 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.017867 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.017874 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.017881 | orchestrator |
2026-03-28 04:35:02.017888 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-28 04:35:02.017895 | orchestrator | Saturday 28 March 2026 04:34:01 +0000 (0:00:01.444) 0:01:15.482 ********
2026-03-28 04:35:02.017903 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-28 04:35:02.017911 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-28 04:35:02.017919 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-28 04:35:02.017926 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-28 04:35:02.017933 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-28 04:35:02.017940 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-28 04:35:02.017947 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.017954 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.017961 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.017968 | orchestrator |
2026-03-28 04:35:02.017975 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-28 04:35:02.017983 | orchestrator | Saturday 28 March 2026 04:34:24 +0000 (0:00:23.421) 0:01:38.903 ********
2026-03-28 04:35:02.017990 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:35:02.017997 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:35:02.018004 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:35:02.018012 | orchestrator |
2026-03-28 04:35:02.018083 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-28 04:35:02.018093 | orchestrator | Saturday 28 March 2026 04:34:26 +0000 (0:00:01.366) 0:01:40.270 ********
2026-03-28 04:35:02.018101 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:02.018109 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:02.018118 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:02.018127 | orchestrator |
2026-03-28 04:35:02.018136 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-28 04:35:02.018152 | orchestrator | Saturday 28 March 2026 04:34:28 +0000 (0:00:02.116) 0:01:42.386 ********
2026-03-28 04:35:02.018160 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.018169 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.018177 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.018184 | orchestrator |
2026-03-28 04:35:02.018193 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-28 04:35:02.018201 | orchestrator | Saturday 28 March 2026 04:34:30 +0000 (0:00:02.283) 0:01:44.669 ********
2026-03-28 04:35:02.018208 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:02.018216 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:02.018225 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:02.018233 | orchestrator |
2026-03-28 04:35:02.018241 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-28 04:35:02.018250 | orchestrator | Saturday 28 March 2026 04:34:56 +0000 (0:00:25.908) 0:02:10.578 ********
2026-03-28 04:35:02.018257 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.018265 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.018273 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.018280 | orchestrator |
2026-03-28 04:35:02.018287 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-28 04:35:02.018323 | orchestrator | Saturday 28 March 2026 04:34:58 +0000 (0:00:01.683) 0:02:12.261 ********
2026-03-28 04:35:02.018332 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:02.018341 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:02.018349 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:02.018357 | orchestrator |
2026-03-28 04:35:02.018365 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-28 04:35:02.018372 | orchestrator | Saturday 28 March 2026 04:34:59 +0000 (0:00:01.756) 0:02:14.018 ********
2026-03-28 04:35:02.018380 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:02.018388 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:02.018395 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:02.018403 | orchestrator |
2026-03-28 04:35:02.018421 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-28 04:35:49.976639 | orchestrator | Saturday 28 March 2026 04:35:01 +0000 (0:00:02.017) 0:02:16.036 ********
2026-03-28 04:35:49.976746 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:49.976757 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:49.976763 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:49.976770 | orchestrator |
2026-03-28 04:35:49.976780 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-28 04:35:49.976789 | orchestrator | Saturday 28 March 2026 04:35:03 +0000 (0:00:01.836) 0:02:17.873 ********
2026-03-28 04:35:49.976798 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:49.976806 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:49.976814 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:49.976822 | orchestrator |
2026-03-28 04:35:49.976831 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-28 04:35:49.976841 | orchestrator | Saturday 28 March 2026 04:35:05 +0000 (0:00:01.338) 0:02:19.211 ********
2026-03-28 04:35:49.976850 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:49.976860 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:49.976868 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:49.976873 | orchestrator |
2026-03-28 04:35:49.976879 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-28 04:35:49.976884 | orchestrator | Saturday 28 March 2026 04:35:06 +0000 (0:00:01.656) 0:02:20.868 ********
2026-03-28 04:35:49.976890 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:49.976895 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:49.976901 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:49.976906 | orchestrator |
2026-03-28 04:35:49.976912 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-28 04:35:49.976917 | orchestrator | Saturday 28 March 2026 04:35:08 +0000 (0:00:01.883) 0:02:22.751 ********
2026-03-28 04:35:49.976923 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:49.976944 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:49.976950 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:49.976955 | orchestrator |
2026-03-28 04:35:49.976960 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-28 04:35:49.976976 | orchestrator | Saturday 28 March 2026 04:35:10 +0000 (0:00:01.786) 0:02:24.538 ********
2026-03-28 04:35:49.976981 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:35:49.976986 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:35:49.976991 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:35:49.976996 | orchestrator |
2026-03-28 04:35:49.977001 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-28 04:35:49.977006 | orchestrator | Saturday 28 March 2026 04:35:12 +0000 (0:00:01.836) 0:02:26.375 ********
2026-03-28 04:35:49.977011 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:35:49.977016 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:35:49.977022 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:35:49.977027 | orchestrator |
2026-03-28 04:35:49.977032 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-28 04:35:49.977037 | orchestrator | Saturday 28 March 2026 04:35:13 +0000 (0:00:01.312) 0:02:27.687 ********
2026-03-28 04:35:49.977042 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:35:49.977047 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:35:49.977052 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:35:49.977057 | orchestrator |
2026-03-28 04:35:49.977062 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-28 04:35:49.977067 | orchestrator | Saturday 28 March 2026 04:35:14 +0000 (0:00:01.298) 0:02:28.986 ********
2026-03-28 04:35:49.977072 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:49.977077 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:49.977083 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:49.977088 | orchestrator |
2026-03-28 04:35:49.977093 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-28 04:35:49.977098 | orchestrator | Saturday 28 March 2026 04:35:16 +0000 (0:00:01.765) 0:02:30.752 ********
2026-03-28 04:35:49.977104 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:35:49.977109 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:35:49.977114 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:35:49.977119 | orchestrator |
2026-03-28 04:35:49.977125 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-28 04:35:49.977131 | orchestrator | Saturday 28 March 2026 04:35:18 +0000 (0:00:01.675) 0:02:32.428 ********
2026-03-28 04:35:49.977136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 04:35:49.977142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 04:35:49.977147 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 04:35:49.977152 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 04:35:49.977157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-28 04:35:49.977163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 04:35:49.977168 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 04:35:49.977174 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-28 04:35:49.977179 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-28 04:35:49.977184 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 04:35:49.977189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-28 04:35:49.977199 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-28 04:35:49.977218 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 04:35:49.977223 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 04:35:49.977228 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-28 04:35:49.977234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 04:35:49.977239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 04:35:49.977279 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-28 04:35:49.977285 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 04:35:49.977290 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-28 04:35:49.977295 | orchestrator |
2026-03-28 04:35:49.977300 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-28 04:35:49.977305 | orchestrator |
2026-03-28 04:35:49.977310 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-28 04:35:49.977316 | orchestrator | Saturday 28 March 2026 04:35:22 +0000 (0:00:04.388) 0:02:36.816 ********
2026-03-28 04:35:49.977321 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977326 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977331 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977336 | orchestrator |
2026-03-28 04:35:49.977341 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-28 04:35:49.977346 | orchestrator | Saturday 28 March 2026 04:35:24 +0000 (0:00:01.385) 0:02:38.202 ********
2026-03-28 04:35:49.977352 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977357 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977362 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977367 | orchestrator |
2026-03-28 04:35:49.977372 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-28 04:35:49.977378 | orchestrator | Saturday 28 March 2026 04:35:25 +0000 (0:00:01.637) 0:02:39.839 ********
2026-03-28 04:35:49.977383 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977388 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977393 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977398 | orchestrator |
2026-03-28 04:35:49.977403 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-28 04:35:49.977409 | orchestrator | Saturday 28 March 2026 04:35:27 +0000 (0:00:01.541) 0:02:41.381 ********
2026-03-28 04:35:49.977414 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 04:35:49.977419 | orchestrator |
2026-03-28 04:35:49.977424 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-28 04:35:49.977430 | orchestrator | Saturday 28 March 2026 04:35:28 +0000 (0:00:01.647) 0:02:43.029 ********
2026-03-28 04:35:49.977435 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:35:49.977440 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:35:49.977445 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:35:49.977450 | orchestrator |
2026-03-28 04:35:49.977455 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-28 04:35:49.977461 | orchestrator | Saturday 28 March 2026 04:35:30 +0000 (0:00:01.573) 0:02:44.602 ********
2026-03-28 04:35:49.977466 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:35:49.977471 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:35:49.977476 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:35:49.977481 | orchestrator |
2026-03-28 04:35:49.977486 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-28 04:35:49.977492 | orchestrator | Saturday 28 March 2026 04:35:31 +0000 (0:00:01.433) 0:02:46.036 ********
2026-03-28 04:35:49.977501 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:35:49.977507 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:35:49.977512 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:35:49.977517 | orchestrator |
2026-03-28 04:35:49.977522 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-28 04:35:49.977527 | orchestrator | Saturday 28 March 2026 04:35:33 +0000 (0:00:01.347) 0:02:47.384 ********
2026-03-28 04:35:49.977532 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977537 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977542 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977548 | orchestrator |
2026-03-28 04:35:49.977553 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-28 04:35:49.977564 | orchestrator | Saturday 28 March 2026 04:35:34 +0000 (0:00:01.654) 0:02:49.039 ********
2026-03-28 04:35:49.977569 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977574 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977579 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977584 | orchestrator |
2026-03-28 04:35:49.977589 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-28 04:35:49.977595 | orchestrator | Saturday 28 March 2026 04:35:37 +0000 (0:00:02.482) 0:02:51.521 ********
2026-03-28 04:35:49.977600 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:35:49.977605 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:35:49.977610 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:35:49.977615 | orchestrator |
2026-03-28 04:35:49.977620 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-28 04:35:49.977625 | orchestrator | Saturday 28 March 2026 04:35:39 +0000 (0:00:02.310) 0:02:53.831 ********
2026-03-28 04:35:49.977631 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:35:49.977636 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:35:49.977641 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:35:49.977646 | orchestrator |
2026-03-28 04:35:49.977651 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-28 04:35:49.977657 | orchestrator |
2026-03-28 04:35:49.977662 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-28 04:35:49.977667 | orchestrator | Saturday 28 March 2026 04:35:47 +0000 (0:00:08.055) 0:03:01.886 ********
2026-03-28 04:35:49.977672 | orchestrator | ok: [testbed-manager]
2026-03-28 04:35:49.977677 | orchestrator |
2026-03-28 04:35:49.977682 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-28 04:35:49.977691 | orchestrator | Saturday 28 March 2026 04:35:49 +0000 (0:00:02.114) 0:03:04.001 ********
2026-03-28 04:36:58.922168 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922315 | orchestrator |
2026-03-28 04:36:58.922326 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-28 04:36:58.922334 | orchestrator | Saturday 28 March 2026 04:35:51 +0000 (0:00:01.456) 0:03:05.457 ********
2026-03-28 04:36:58.922341 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 04:36:58.922347 | orchestrator |
2026-03-28 04:36:58.922353 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-28 04:36:58.922359 | orchestrator | Saturday 28 March 2026 04:35:53 +0000 (0:00:01.601) 0:03:07.058 ********
2026-03-28 04:36:58.922365 | orchestrator | changed: [testbed-manager]
2026-03-28 04:36:58.922371 | orchestrator |
2026-03-28 04:36:58.922377 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-28 04:36:58.922383 | orchestrator | Saturday 28 March 2026 04:35:54 +0000 (0:00:01.907) 0:03:08.965 ********
2026-03-28 04:36:58.922389 | orchestrator | changed: [testbed-manager]
2026-03-28 04:36:58.922404 | orchestrator |
2026-03-28 04:36:58.922411 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-28 04:36:58.922425 | orchestrator | Saturday 28 March 2026 04:35:56 +0000 (0:00:01.616) 0:03:10.582 ********
2026-03-28 04:36:58.922431 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 04:36:58.922437 | orchestrator |
2026-03-28 04:36:58.922460 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-28 04:36:58.922466 | orchestrator | Saturday 28 March 2026 04:35:59 +0000 (0:00:02.940) 0:03:13.523 ********
2026-03-28 04:36:58.922472 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 04:36:58.922478 | orchestrator |
2026-03-28 04:36:58.922483 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-28 04:36:58.922489 | orchestrator | Saturday 28 March 2026 04:36:01 +0000 (0:00:01.870) 0:03:15.393 ********
2026-03-28 04:36:58.922506 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922512 | orchestrator |
2026-03-28 04:36:58.922517 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-28 04:36:58.922523 | orchestrator | Saturday 28 March 2026 04:36:02 +0000 (0:00:01.430) 0:03:16.824 ********
2026-03-28 04:36:58.922529 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922534 | orchestrator |
2026-03-28 04:36:58.922541 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-28 04:36:58.922552 | orchestrator |
2026-03-28 04:36:58.922562 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-28 04:36:58.922572 | orchestrator | Saturday 28 March 2026 04:36:04 +0000 (0:00:01.580) 0:03:18.405 ********
2026-03-28 04:36:58.922582 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922591 | orchestrator |
2026-03-28 04:36:58.922601 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-28 04:36:58.922611 | orchestrator | Saturday 28 March 2026 04:36:05 +0000 (0:00:01.139) 0:03:19.544 ********
2026-03-28 04:36:58.922621 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 04:36:58.922632 | orchestrator |
2026-03-28 04:36:58.922641 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-28 04:36:58.922647 | orchestrator | Saturday 28 March 2026 04:36:06 +0000 (0:00:01.460) 0:03:21.005 ********
2026-03-28 04:36:58.922653 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922658 | orchestrator |
2026-03-28 04:36:58.922665 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-28 04:36:58.922672 | orchestrator | Saturday 28 March 2026 04:36:08 +0000 (0:00:01.948) 0:03:22.953 ********
2026-03-28 04:36:58.922678 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922685 | orchestrator |
2026-03-28 04:36:58.922691 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-28 04:36:58.922698 | orchestrator | Saturday 28 March 2026 04:36:11 +0000 (0:00:02.737) 0:03:25.690 ********
2026-03-28 04:36:58.922704 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922711 | orchestrator |
2026-03-28 04:36:58.922717 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-28 04:36:58.922724 | orchestrator | Saturday 28 March 2026 04:36:13 +0000 (0:00:01.441) 0:03:27.131 ********
2026-03-28 04:36:58.922730 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922737 | orchestrator |
2026-03-28 04:36:58.922744 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-28 04:36:58.922750 | orchestrator | Saturday 28 March 2026 04:36:14 +0000 (0:00:01.464) 0:03:28.596 ********
2026-03-28 04:36:58.922757 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922763 | orchestrator |
2026-03-28 04:36:58.922770 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-28 04:36:58.922776 | orchestrator | Saturday 28 March 2026 04:36:16 +0000 (0:00:01.562) 0:03:30.158 ********
2026-03-28 04:36:58.922783 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922790 | orchestrator |
2026-03-28 04:36:58.922796 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-28 04:36:58.922803 | orchestrator | Saturday 28 March 2026 04:36:18 +0000 (0:00:02.457) 0:03:32.615 ********
2026-03-28 04:36:58.922810 | orchestrator | ok: [testbed-manager]
2026-03-28 04:36:58.922816 | orchestrator |
2026-03-28 04:36:58.922823 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-28 04:36:58.922846 | orchestrator |
2026-03-28 04:36:58.922853 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-28 04:36:58.922860 | orchestrator | Saturday 28 March 2026 04:36:20 +0000 (0:00:01.697) 0:03:34.313 ********
2026-03-28 04:36:58.922867 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:36:58.922873 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:36:58.922880 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:36:58.922886 | orchestrator |
2026-03-28 04:36:58.922892 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-28 04:36:58.922899 | orchestrator | Saturday 28 March 2026 04:36:21 +0000 (0:00:01.352) 0:03:35.666 ********
2026-03-28 04:36:58.922905 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:36:58.922912 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:36:58.922918 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:36:58.922925 | orchestrator |
2026-03-28 04:36:58.922944 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-28 04:36:58.922951 | orchestrator | Saturday 28 March 2026 04:36:23 +0000 (0:00:01.631) 0:03:37.297 ********
2026-03-28 04:36:58.922958 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:36:58.922964 | orchestrator |
2026-03-28 04:36:58.922971 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-28 04:36:58.922978 | orchestrator | Saturday 28 March 2026 04:36:25 +0000 (0:00:01.778) 0:03:39.076 ********
2026-03-28 04:36:58.922984 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.922991 | orchestrator |
2026-03-28 04:36:58.922997 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-28 04:36:58.923004 | orchestrator | Saturday 28 March 2026 04:36:26 +0000 (0:00:01.840) 0:03:40.916 ********
2026-03-28 04:36:58.923010 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923017 | orchestrator |
2026-03-28 04:36:58.923024 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-28 04:36:58.923031 | orchestrator | Saturday 28 March 2026 04:36:28 +0000 (0:00:01.906) 0:03:42.823 ********
2026-03-28 04:36:58.923038 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:36:58.923044 | orchestrator |
2026-03-28 04:36:58.923049 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-28 04:36:58.923055 | orchestrator | Saturday 28 March 2026 04:36:29 +0000 (0:00:01.176) 0:03:43.999 ********
2026-03-28 04:36:58.923061 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923067 | orchestrator |
2026-03-28 04:36:58.923073 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-28 04:36:58.923078 | orchestrator | Saturday 28 March 2026 04:36:32 +0000 (0:00:02.052) 0:03:46.051 ********
2026-03-28 04:36:58.923084 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923090 | orchestrator |
2026-03-28 04:36:58.923096 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-28 04:36:58.923102 | orchestrator | Saturday 28 March 2026 04:36:34 +0000 (0:00:02.231) 0:03:48.283 ********
2026-03-28 04:36:58.923108 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923113 | orchestrator |
2026-03-28 04:36:58.923119 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-28 04:36:58.923125 | orchestrator | Saturday 28 March 2026 04:36:35 +0000 (0:00:01.179) 0:03:49.462 ********
2026-03-28 04:36:58.923131 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923137 | orchestrator |
2026-03-28 04:36:58.923142 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-28 04:36:58.923148 | orchestrator | Saturday 28 March 2026 04:36:36 +0000 (0:00:01.121) 0:03:50.584 ********
2026-03-28 04:36:58.923154 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-28 04:36:58.923160 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-28 04:36:58.923167 | orchestrator | }
2026-03-28 04:36:58.923173 | orchestrator |
2026-03-28 04:36:58.923233 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-28 04:36:58.923241 | orchestrator | Saturday 28 March 2026 04:36:37 +0000 (0:00:01.198) 0:03:51.783 ********
2026-03-28 04:36:58.923246 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:36:58.923252 | orchestrator |
2026-03-28 04:36:58.923258 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-28 04:36:58.923264 | orchestrator | Saturday 28 March 2026 04:36:38 +0000 (0:00:01.115) 0:03:52.899 ********
2026-03-28 04:36:58.923269 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-28 04:36:58.923275 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-28 04:36:58.923281 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-28 04:36:58.923287 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-28 04:36:58.923292 | orchestrator |
2026-03-28 04:36:58.923298 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-28 04:36:58.923304 | orchestrator | Saturday 28 March 2026 04:36:44 +0000 (0:00:05.499) 0:03:58.399 ********
2026-03-28 04:36:58.923310 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923315 | orchestrator |
2026-03-28 04:36:58.923321 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-28 04:36:58.923327 | orchestrator | Saturday 28 March 2026 04:36:46 +0000 (0:00:02.459) 0:04:00.858 ********
2026-03-28 04:36:58.923333 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923339 | orchestrator |
2026-03-28 04:36:58.923344 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-28 04:36:58.923350 | orchestrator | Saturday 28 March 2026 04:36:49 +0000 (0:00:02.569) 0:04:03.428 ********
2026-03-28 04:36:58.923356 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 04:36:58.923362 | orchestrator |
2026-03-28 04:36:58.923374 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-28 04:36:58.923380 | orchestrator | Saturday 28 March 2026 04:36:53 +0000 (0:00:04.086) 0:04:07.514 ********
2026-03-28 04:36:58.923386 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:36:58.923391 | orchestrator |
2026-03-28 04:36:58.923397 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-28 04:36:58.923403 | orchestrator | Saturday 28 March 2026 04:36:54 +0000 (0:00:01.158) 0:04:08.673 ********
2026-03-28 04:36:58.923409 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-28 04:36:58.923415 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-28 04:36:58.923421 | orchestrator |
2026-03-28 04:36:58.923427 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-28 04:36:58.923432 | orchestrator | Saturday 28 March 2026 04:36:57 +0000 (0:00:02.877) 0:04:11.551 ********
2026-03-28 04:36:58.923438 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:36:58.923449 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:37:24.924057 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:37:24.924218 | orchestrator |
2026-03-28 04:37:24.924238 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-28 04:37:24.924251 | orchestrator | Saturday 28 March 2026 04:36:58 +0000 (0:00:01.404) 0:04:12.955 ********
2026-03-28 04:37:24.924263 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:37:24.924275 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:37:24.924286 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:37:24.924296 | orchestrator |
2026-03-28 04:37:24.924307 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-28 04:37:24.924318 | orchestrator |
2026-03-28 04:37:24.924330 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-28 04:37:24.924341 | orchestrator | Saturday 28 March 2026 04:37:01 +0000 (0:00:02.094) 0:04:15.050 ********
2026-03-28 04:37:24.924352 | orchestrator | ok: [testbed-manager]
2026-03-28 04:37:24.924389 | orchestrator |
2026-03-28 04:37:24.924400 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-28 04:37:24.924411 | orchestrator | Saturday 28 March 2026 04:37:02 +0000 (0:00:01.158) 0:04:16.209 ********
2026-03-28 04:37:24.924422 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-28 04:37:24.924433 | orchestrator |
2026-03-28 04:37:24.924444 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-28 04:37:24.924455 | orchestrator | Saturday 28 March 2026 04:37:03 +0000 (0:00:01.541) 0:04:17.750 ********
2026-03-28 04:37:24.924465 | orchestrator | ok: [testbed-manager]
2026-03-28 04:37:24.924476 | orchestrator |
2026-03-28 04:37:24.924487 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-28 04:37:24.924498 | orchestrator |
2026-03-28 04:37:24.924508 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-28 04:37:24.924533 | orchestrator | Saturday 28 March 2026 04:37:08 +0000 (0:00:04.780) 0:04:22.531 ********
2026-03-28 04:37:24.924544 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:37:24.924555 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:37:24.924566 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:37:24.924577 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:37:24.924589 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:37:24.924601 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:37:24.924613 | orchestrator |
2026-03-28 04:37:24.924626 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-28 04:37:24.924638 | orchestrator | Saturday 28 March 2026 04:37:10 +0000 (0:00:01.904) 0:04:24.436 ********
2026-03-28 04:37:24.924650 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 04:37:24.924663 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 04:37:24.924675 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-28 04:37:24.924688 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 04:37:24.924700 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 04:37:24.924712 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-28 04:37:24.924723 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-28 04:37:24.924736 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-28 04:37:24.924749 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-28 04:37:24.924762 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-28 04:37:24.924773 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-28 04:37:24.924785 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-28 04:37:24.924797 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-28 04:37:24.924809 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-28 04:37:24.924821 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-28 04:37:24.924833 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-28 04:37:24.924845 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-28 04:37:24.924858 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-28 04:37:24.924870 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-28 04:37:24.924882 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-28 04:37:24.924903 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-28 04:37:24.924915 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-28 04:37:24.924926 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-28 
04:37:24.924937 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-28 04:37:24.924947 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-28 04:37:24.924958 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-28 04:37:24.924987 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-28 04:37:24.924999 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-28 04:37:24.925009 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-28 04:37:24.925020 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-28 04:37:24.925031 | orchestrator | 2026-03-28 04:37:24.925042 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-28 04:37:24.925052 | orchestrator | Saturday 28 March 2026 04:37:20 +0000 (0:00:10.171) 0:04:34.607 ******** 2026-03-28 04:37:24.925063 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:37:24.925074 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:37:24.925085 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:37:24.925095 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:37:24.925106 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:37:24.925117 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:37:24.925127 | orchestrator | 2026-03-28 04:37:24.925139 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-28 04:37:24.925150 | orchestrator | Saturday 28 March 2026 04:37:22 +0000 (0:00:01.870) 0:04:36.478 ******** 2026-03-28 04:37:24.925239 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:37:24.925252 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 04:37:24.925263 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:37:24.925350 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:37:24.925363 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:37:24.925374 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:37:24.925385 | orchestrator | 2026-03-28 04:37:24.925396 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:37:24.925407 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 04:37:24.925421 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-28 04:37:24.925432 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-28 04:37:24.925443 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-28 04:37:24.925454 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 04:37:24.925465 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 04:37:24.925476 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 04:37:24.925486 | orchestrator | 2026-03-28 04:37:24.925497 | orchestrator | 2026-03-28 04:37:24.925508 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:37:24.925530 | orchestrator | Saturday 28 March 2026 04:37:24 +0000 (0:00:02.457) 0:04:38.935 ******** 2026-03-28 04:37:24.925541 | orchestrator | =============================================================================== 2026-03-28 04:37:24.925552 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.91s 2026-03-28 04:37:24.925562 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.42s 2026-03-28 04:37:24.925574 | orchestrator | Manage labels ---------------------------------------------------------- 10.17s 2026-03-28 04:37:24.925585 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.06s 2026-03-28 04:37:24.925596 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.50s 2026-03-28 04:37:24.925607 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.78s 2026-03-28 04:37:24.925617 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.44s 2026-03-28 04:37:24.925628 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.39s 2026-03-28 04:37:24.925639 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.09s 2026-03-28 04:37:24.925650 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.32s 2026-03-28 04:37:24.925661 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.94s 2026-03-28 04:37:24.925672 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.88s 2026-03-28 04:37:24.925682 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.81s 2026-03-28 04:37:24.925693 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.74s 2026-03-28 04:37:24.925704 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.68s 2026-03-28 04:37:24.925715 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.62s 2026-03-28 04:37:24.925726 | orchestrator | k3s_server_post : Copy 
BGP manifests to first master -------------------- 2.57s 2026-03-28 04:37:24.925737 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.49s 2026-03-28 04:37:24.925757 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 2.48s 2026-03-28 04:37:25.380276 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.46s 2026-03-28 04:37:25.727275 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-28 04:37:25.727376 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-03-28 04:37:25.737508 | orchestrator | + set -e 2026-03-28 04:37:25.737567 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 04:37:25.737581 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 04:37:25.737593 | orchestrator | ++ INTERACTIVE=false 2026-03-28 04:37:25.737604 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 04:37:25.737615 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 04:37:25.737626 | orchestrator | + osism apply openstackclient 2026-03-28 04:37:38.051776 | orchestrator | 2026-03-28 04:37:38 | INFO  | Task cb0782a9-a1ef-4068-a89c-d5f34ebdbc14 (openstackclient) was prepared for execution. 2026-03-28 04:37:38.051902 | orchestrator | 2026-03-28 04:37:38 | INFO  | It takes a moment until task cb0782a9-a1ef-4068-a89c-d5f34ebdbc14 (openstackclient) has been started and output is visible here. 
2026-03-28 04:38:12.926856 | orchestrator | 2026-03-28 04:38:12.926942 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-28 04:38:12.926951 | orchestrator | 2026-03-28 04:38:12.926959 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-28 04:38:12.926966 | orchestrator | Saturday 28 March 2026 04:37:44 +0000 (0:00:01.988) 0:00:01.988 ******** 2026-03-28 04:38:12.926974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-28 04:38:12.927003 | orchestrator | 2026-03-28 04:38:12.927010 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-28 04:38:12.927029 | orchestrator | Saturday 28 March 2026 04:37:46 +0000 (0:00:01.836) 0:00:03.824 ******** 2026-03-28 04:38:12.927035 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-28 04:38:12.927044 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-28 04:38:12.927051 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-28 04:38:12.927055 | orchestrator | 2026-03-28 04:38:12.927059 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-28 04:38:12.927063 | orchestrator | Saturday 28 March 2026 04:37:48 +0000 (0:00:02.352) 0:00:06.177 ******** 2026-03-28 04:38:12.927067 | orchestrator | changed: [testbed-manager] 2026-03-28 04:38:12.927071 | orchestrator | 2026-03-28 04:38:12.927075 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-28 04:38:12.927079 | orchestrator | Saturday 28 March 2026 04:37:50 +0000 (0:00:02.334) 0:00:08.512 ******** 2026-03-28 04:38:12.927083 | orchestrator | ok: [testbed-manager] 2026-03-28 04:38:12.927087 | 
orchestrator | 2026-03-28 04:38:12.927091 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-28 04:38:12.927095 | orchestrator | Saturday 28 March 2026 04:37:53 +0000 (0:00:02.150) 0:00:10.663 ******** 2026-03-28 04:38:12.927099 | orchestrator | ok: [testbed-manager] 2026-03-28 04:38:12.927103 | orchestrator | 2026-03-28 04:38:12.927107 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-28 04:38:12.927110 | orchestrator | Saturday 28 March 2026 04:37:54 +0000 (0:00:01.903) 0:00:12.566 ******** 2026-03-28 04:38:12.927135 | orchestrator | ok: [testbed-manager] 2026-03-28 04:38:12.927139 | orchestrator | 2026-03-28 04:38:12.927143 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-28 04:38:12.927146 | orchestrator | Saturday 28 March 2026 04:37:56 +0000 (0:00:01.424) 0:00:13.990 ******** 2026-03-28 04:38:12.927151 | orchestrator | changed: [testbed-manager] 2026-03-28 04:38:12.927155 | orchestrator | 2026-03-28 04:38:12.927159 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-28 04:38:12.927163 | orchestrator | Saturday 28 March 2026 04:38:07 +0000 (0:00:10.852) 0:00:24.843 ******** 2026-03-28 04:38:12.927166 | orchestrator | changed: [testbed-manager] 2026-03-28 04:38:12.927170 | orchestrator | 2026-03-28 04:38:12.927174 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-28 04:38:12.927178 | orchestrator | Saturday 28 March 2026 04:38:09 +0000 (0:00:01.932) 0:00:26.775 ******** 2026-03-28 04:38:12.927181 | orchestrator | changed: [testbed-manager] 2026-03-28 04:38:12.927185 | orchestrator | 2026-03-28 04:38:12.927189 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-28 04:38:12.927193 | orchestrator | Saturday 28 March 2026 
04:38:10 +0000 (0:00:01.601) 0:00:28.377 ******** 2026-03-28 04:38:12.927197 | orchestrator | ok: [testbed-manager] 2026-03-28 04:38:12.927200 | orchestrator | 2026-03-28 04:38:12.927204 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:38:12.927208 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 04:38:12.927213 | orchestrator | 2026-03-28 04:38:12.927217 | orchestrator | 2026-03-28 04:38:12.927221 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:38:12.927225 | orchestrator | Saturday 28 March 2026 04:38:12 +0000 (0:00:01.838) 0:00:30.215 ******** 2026-03-28 04:38:12.927228 | orchestrator | =============================================================================== 2026-03-28 04:38:12.927232 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.85s 2026-03-28 04:38:12.927236 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.35s 2026-03-28 04:38:12.927240 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.33s 2026-03-28 04:38:12.927248 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.15s 2026-03-28 04:38:12.927252 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.93s 2026-03-28 04:38:12.927256 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.90s 2026-03-28 04:38:12.927260 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.84s 2026-03-28 04:38:12.927263 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.84s 2026-03-28 04:38:12.927267 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.60s 2026-03-28 
04:38:12.927271 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.42s 2026-03-28 04:38:13.248689 | orchestrator | + osism apply -a upgrade common 2026-03-28 04:38:15.382923 | orchestrator | 2026-03-28 04:38:15 | INFO  | Task d0675eb7-6a8b-4010-b699-fe5c65df462b (common) was prepared for execution. 2026-03-28 04:38:15.383026 | orchestrator | 2026-03-28 04:38:15 | INFO  | It takes a moment until task d0675eb7-6a8b-4010-b699-fe5c65df462b (common) has been started and output is visible here. 2026-03-28 04:38:34.649136 | orchestrator | 2026-03-28 04:38:34.649250 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-28 04:38:34.649268 | orchestrator | 2026-03-28 04:38:34.649280 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 04:38:34.649291 | orchestrator | Saturday 28 March 2026 04:38:21 +0000 (0:00:02.321) 0:00:02.321 ******** 2026-03-28 04:38:34.649302 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 04:38:34.649323 | orchestrator | 2026-03-28 04:38:34.649343 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-28 04:38:34.649364 | orchestrator | Saturday 28 March 2026 04:38:25 +0000 (0:00:03.585) 0:00:05.906 ******** 2026-03-28 04:38:34.649403 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649423 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649440 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649459 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649479 | orchestrator | ok: [testbed-node-2] 
=> (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649497 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649516 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649535 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649555 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649575 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:38:34.649594 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649614 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649634 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649655 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649676 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649696 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649717 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:38:34.649738 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649787 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649810 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649830 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:38:34.649850 | 
orchestrator | 2026-03-28 04:38:34.649871 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 04:38:34.649891 | orchestrator | Saturday 28 March 2026 04:38:29 +0000 (0:00:03.673) 0:00:09.580 ******** 2026-03-28 04:38:34.649911 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 04:38:34.649931 | orchestrator | 2026-03-28 04:38:34.649950 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-28 04:38:34.649969 | orchestrator | Saturday 28 March 2026 04:38:32 +0000 (0:00:02.889) 0:00:12.470 ******** 2026-03-28 04:38:34.649992 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650200 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650224 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650292 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:34.650530 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:34.650572 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:34.650622 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802071 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802186 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 04:38:37.802221 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802229 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802254 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802263 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802270 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802291 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802298 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:37.802324 | orchestrator | 2026-03-28 04:38:37.802332 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-28 04:38:37.802338 | orchestrator | Saturday 28 March 2026 04:38:36 +0000 (0:00:04.876) 0:00:17.346 ******** 2026-03-28 04:38:37.802347 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:37.802356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:37.802363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:37.802370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:37.802383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.406813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.406966 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:38:40.407025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:40.407043 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:38:40.407060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407091 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:38:40.407210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:40.407234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:40.407291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:40.407323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407340 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:38:40.407356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:40.407372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:40.407449 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:38:40.407476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955234 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:38:41.955339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955357 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:38:41.955369 | orchestrator | 2026-03-28 04:38:41.955381 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 04:38:41.955393 | orchestrator | Saturday 28 March 2026 04:38:40 +0000 (0:00:03.476) 0:00:20.822 ******** 2026-03-28 04:38:41.955407 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:41.955422 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:41.955464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 04:38:41.955477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955489 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:38:41.955501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:41.955551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:41.955564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:41.955600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:41.955664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.230835 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:38:57.230950 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 04:38:57.230966 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:38:57.230979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.230994 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:38:57.231007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:57.231022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.231034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:38:57.231062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.231075 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:38:57.231130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.231180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:38:57.231199 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:38:57.231216 | orchestrator | 2026-03-28 04:38:57.231228 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-28 04:38:57.231240 | orchestrator | Saturday 28 March 2026 04:38:44 +0000 (0:00:03.659) 0:00:24.482 ******** 2026-03-28 04:38:57.231251 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:38:57.231279 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:38:57.231291 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:38:57.231302 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:38:57.231313 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:38:57.231323 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:38:57.231334 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:38:57.231345 | orchestrator | 2026-03-28 04:38:57.231356 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 04:38:57.231367 | orchestrator | Saturday 28 March 2026 04:38:46 +0000 (0:00:02.414) 0:00:26.897 ******** 2026-03-28 04:38:57.231378 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:38:57.231389 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:38:57.231400 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:38:57.231411 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:38:57.231422 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:38:57.231432 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:38:57.231443 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:38:57.231454 | orchestrator | 2026-03-28 04:38:57.231465 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 
04:38:57.231476 | orchestrator | Saturday 28 March 2026 04:38:48 +0000 (0:00:02.321) 0:00:29.219 ******** 2026-03-28 04:38:57.231486 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:38:57.231497 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:38:57.231508 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:38:57.231519 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:38:57.231529 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:38:57.231540 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:38:57.231551 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:38:57.231562 | orchestrator | 2026-03-28 04:38:57.231572 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-28 04:38:57.231583 | orchestrator | Saturday 28 March 2026 04:38:50 +0000 (0:00:02.162) 0:00:31.381 ******** 2026-03-28 04:38:57.231594 | orchestrator | changed: [testbed-manager] 2026-03-28 04:38:57.231605 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:38:57.231615 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:38:57.231626 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:38:57.231637 | orchestrator | changed: [testbed-node-3] 2026-03-28 04:38:57.231648 | orchestrator | changed: [testbed-node-4] 2026-03-28 04:38:57.231658 | orchestrator | changed: [testbed-node-5] 2026-03-28 04:38:57.231669 | orchestrator | 2026-03-28 04:38:57.231680 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 04:38:57.231691 | orchestrator | Saturday 28 March 2026 04:38:54 +0000 (0:00:03.186) 0:00:34.568 ******** 2026-03-28 04:38:57.231711 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:57.231729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:57.231741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:57.231753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:57.231772 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:59.174937 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:59.175059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:38:59.175211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175269 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:38:59.175405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:19.953658 | orchestrator | 2026-03-28 04:39:19.953769 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-28 04:39:19.953786 | orchestrator | Saturday 28 March 2026 04:38:59 +0000 (0:00:05.027) 0:00:39.596 ******** 2026-03-28 04:39:19.953798 | orchestrator | [WARNING]: Skipped 2026-03-28 
04:39:19.953809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-28 04:39:19.953820 | orchestrator | to this access issue: 2026-03-28 04:39:19.953850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-28 04:39:19.953861 | orchestrator | directory 2026-03-28 04:39:19.953870 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:39:19.953881 | orchestrator | 2026-03-28 04:39:19.953891 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-28 04:39:19.953901 | orchestrator | Saturday 28 March 2026 04:39:01 +0000 (0:00:02.353) 0:00:41.949 ******** 2026-03-28 04:39:19.953911 | orchestrator | [WARNING]: Skipped 2026-03-28 04:39:19.953920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-28 04:39:19.953930 | orchestrator | to this access issue: 2026-03-28 04:39:19.953940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-28 04:39:19.953949 | orchestrator | directory 2026-03-28 04:39:19.953959 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:39:19.953968 | orchestrator | 2026-03-28 04:39:19.953978 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-28 04:39:19.953988 | orchestrator | Saturday 28 March 2026 04:39:03 +0000 (0:00:02.052) 0:00:44.002 ******** 2026-03-28 04:39:19.953997 | orchestrator | [WARNING]: Skipped 2026-03-28 04:39:19.954007 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-28 04:39:19.954112 | orchestrator | to this access issue: 2026-03-28 04:39:19.954127 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-28 04:39:19.954137 | orchestrator | directory 2026-03-28 04:39:19.954147 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-03-28 04:39:19.954156 | orchestrator | 2026-03-28 04:39:19.954166 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-28 04:39:19.954176 | orchestrator | Saturday 28 March 2026 04:39:05 +0000 (0:00:01.852) 0:00:45.855 ******** 2026-03-28 04:39:19.954185 | orchestrator | [WARNING]: Skipped 2026-03-28 04:39:19.954195 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-28 04:39:19.954206 | orchestrator | to this access issue: 2026-03-28 04:39:19.954228 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-28 04:39:19.954239 | orchestrator | directory 2026-03-28 04:39:19.954251 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:39:19.954262 | orchestrator | 2026-03-28 04:39:19.954287 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-28 04:39:19.954299 | orchestrator | Saturday 28 March 2026 04:39:07 +0000 (0:00:01.840) 0:00:47.695 ******** 2026-03-28 04:39:19.954309 | orchestrator | changed: [testbed-manager] 2026-03-28 04:39:19.954320 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:39:19.954331 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:39:19.954342 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:39:19.954352 | orchestrator | changed: [testbed-node-4] 2026-03-28 04:39:19.954363 | orchestrator | changed: [testbed-node-3] 2026-03-28 04:39:19.954374 | orchestrator | changed: [testbed-node-5] 2026-03-28 04:39:19.954385 | orchestrator | 2026-03-28 04:39:19.954396 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-28 04:39:19.954407 | orchestrator | Saturday 28 March 2026 04:39:11 +0000 (0:00:03.929) 0:00:51.625 ******** 2026-03-28 04:39:19.954418 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954431 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954442 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954452 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954463 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954482 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954493 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:39:19.954504 | orchestrator | 2026-03-28 04:39:19.954515 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-28 04:39:19.954526 | orchestrator | Saturday 28 March 2026 04:39:14 +0000 (0:00:03.012) 0:00:54.637 ******** 2026-03-28 04:39:19.954537 | orchestrator | ok: [testbed-manager] 2026-03-28 04:39:19.954548 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:39:19.954559 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:39:19.954568 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:39:19.954578 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:39:19.954587 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:39:19.954597 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:39:19.954606 | orchestrator | 2026-03-28 04:39:19.954616 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 04:39:19.954625 | orchestrator | Saturday 28 March 2026 04:39:16 +0000 (0:00:02.801) 0:00:57.438 ******** 2026-03-28 04:39:19.954657 | orchestrator | ok: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:19.954672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:19.954683 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:19.954698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:19.954710 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:19.954729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:19.954739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:19.954749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:19.954766 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:29.168049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:29.168177 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:29.168201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:29.168225 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:29.168235 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:29.168243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:29.168250 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:29.168273 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:39:29.168281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:39:29.168288 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:29.168296 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:29.168309 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:39:29.168384 | orchestrator | 2026-03-28 04:39:29.168395 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************ 2026-03-28 04:39:29.168403 | orchestrator | Saturday 28 March 2026 04:39:19 +0000 (0:00:02.923) 0:01:00.362 ******** 2026-03-28 04:39:29.168410 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168418 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168425 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168431 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168438 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168444 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168451 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:39:29.168458 | orchestrator | 2026-03-28 04:39:29.168465 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-28 04:39:29.168471 | orchestrator | Saturday 28 March 2026 04:39:23 +0000 (0:00:03.167) 0:01:03.529 ******** 2026-03-28 04:39:29.168478 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:39:29.168485 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:39:29.168491 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:39:29.168506 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:39:29.168513 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:39:29.168519 | orchestrator | ok: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 04:39:29.168526 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-28 04:39:29.168533 | orchestrator |
2026-03-28 04:39:29.168539 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-03-28 04:39:29.168546 | orchestrator | Saturday 28 March 2026 04:39:26 +0000 (0:00:03.562) 0:01:07.092 ********
2026-03-28 04:39:29.168560 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:31.050793 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050886 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:31.050910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093331 | orchestrator |
2026-03-28 04:39:34.093344 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-03-28 04:39:34.093357 | orchestrator | Saturday 28 March 2026 04:39:31 +0000 (0:00:04.386) 0:01:11.478 ********
2026-03-28 04:39:34.093369 | orchestrator | changed: [testbed-manager] => {
2026-03-28 04:39:34.093385 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093396 | orchestrator | }
2026-03-28 04:39:34.093408 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 04:39:34.093418 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093429 | orchestrator | }
2026-03-28 04:39:34.093440 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 04:39:34.093451 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093461 | orchestrator | }
2026-03-28 04:39:34.093472 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 04:39:34.093483 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093494 | orchestrator | }
2026-03-28 04:39:34.093504 | orchestrator | changed: [testbed-node-3] => {
2026-03-28 04:39:34.093515 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093526 | orchestrator | }
2026-03-28 04:39:34.093536 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 04:39:34.093547 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093558 | orchestrator | }
2026-03-28 04:39:34.093569 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 04:39:34.093579 | orchestrator |     "msg": "Notifying handlers"
2026-03-28 04:39:34.093611 | orchestrator | }
2026-03-28 04:39:34.093623 | orchestrator |
2026-03-28 04:39:34.093634 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 04:39:34.093645 | orchestrator | Saturday 28 March 2026 04:39:33 +0000 (0:00:02.274) 0:01:13.753 ********
2026-03-28 04:39:34.093658 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:34.093693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093722 | orchestrator | skipping: [testbed-manager]
2026-03-28 04:39:34.093741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:34.093755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:34.093814 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:39:34.093828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:34.093855 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:39:34.093877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:43.219836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.219958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.219986 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:39:43.220030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:43.220059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220159 | orchestrator | skipping: [testbed-node-3]
2026-03-28 04:39:43.220178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:43.220198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220263 | orchestrator | skipping: [testbed-node-4]
2026-03-28 04:39:43.220290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-28 04:39:43.220304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 04:39:43.220337 | orchestrator | skipping: [testbed-node-5]
2026-03-28 04:39:43.220350 | orchestrator |
2026-03-28 04:39:43.220364 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220378 | orchestrator | Saturday 28 March 2026 04:39:36 +0000 (0:00:03.242) 0:01:16.996 ********
2026-03-28 04:39:43.220391 | orchestrator |
2026-03-28 04:39:43.220420 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220443 | orchestrator | Saturday 28 March 2026 04:39:37 +0000 (0:00:00.580) 0:01:17.576 ********
2026-03-28 04:39:43.220456 | orchestrator |
2026-03-28 04:39:43.220468 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220480 | orchestrator | Saturday 28 March 2026 04:39:37 +0000 (0:00:00.452) 0:01:18.029 ********
2026-03-28 04:39:43.220492 | orchestrator |
2026-03-28 04:39:43.220504 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220516 | orchestrator | Saturday 28 March 2026 04:39:38 +0000 (0:00:00.461) 0:01:18.491 ********
2026-03-28 04:39:43.220528 | orchestrator |
2026-03-28 04:39:43.220541 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220554 | orchestrator | Saturday 28 March 2026 04:39:38 +0000 (0:00:00.454) 0:01:18.945 ********
2026-03-28 04:39:43.220565 | orchestrator |
2026-03-28 04:39:43.220578 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220590 | orchestrator | Saturday 28 March 2026 04:39:39 +0000 (0:00:00.853) 0:01:19.799 ********
2026-03-28 04:39:43.220604 | orchestrator |
2026-03-28 04:39:43.220624 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:39:43.220643 | orchestrator | Saturday 28 March 2026 04:39:39 +0000 (0:00:00.438) 0:01:20.238 ********
2026-03-28 04:39:43.220663 | orchestrator |
2026-03-28 04:39:43.220682 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-28 04:39:43.220699 | orchestrator | Saturday 28 March 2026 04:39:40 +0000 (0:00:00.866) 0:01:21.105 ********
2026-03-28 04:39:43.220752 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_49rx3t3g/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_49rx3t3g/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_49rx3t3g/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-28 04:39:46.697801 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__k5r3pf4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__k5r3pf4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__k5r3pf4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-28 04:39:46.697909 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kmh5x1tc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kmh5x1tc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_kmh5x1tc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-28 04:39:46.697939 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ekaf268z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ekaf268z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ekaf268z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-28 04:39:46.697957 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jjzt8l8_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jjzt8l8_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jjzt8l8_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-28 04:39:47.494173 | orchestrator | 2026-03-28 04:39:47 | INFO  | Task b94d54ef-2d83-4cfd-aa84-2b365fefd2ce (common) was prepared for execution. 2026-03-28 04:39:47.494290 | orchestrator | 2026-03-28 04:39:47 | INFO  | It takes a moment until task b94d54ef-2d83-4cfd-aa84-2b365fefd2ce (common) has been started and output is visible here. 2026-03-28 04:39:57.337633 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_95npm4ud/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_95npm4ud/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_95npm4ud/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-28 04:39:57.337802 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_j87fagh4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_j87fagh4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_j87fagh4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-28 04:39:57.337823 | orchestrator | 2026-03-28 04:39:57.337836 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:39:57.337865 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337878 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337888 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337899 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337909 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337920 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 
04:39:57.337930 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-28 04:39:57.337941 | orchestrator | 2026-03-28 04:39:57.337965 | orchestrator | 2026-03-28 04:39:57.337982 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:39:57.337993 | orchestrator | Saturday 28 March 2026 04:39:46 +0000 (0:00:06.028) 0:01:27.133 ******** 2026-03-28 04:39:57.338109 | orchestrator | =============================================================================== 2026-03-28 04:39:57.338122 | orchestrator | common : Restart fluentd container -------------------------------------- 6.03s 2026-03-28 04:39:57.338132 | orchestrator | common : Copying over config.json files for services -------------------- 5.03s 2026-03-28 04:39:57.338145 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.88s 2026-03-28 04:39:57.338156 | orchestrator | service-check-containers : common | Check containers -------------------- 4.39s 2026-03-28 04:39:57.338169 | orchestrator | common : Flush handlers ------------------------------------------------- 4.11s 2026-03-28 04:39:57.338180 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.93s 2026-03-28 04:39:57.338192 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.67s 2026-03-28 04:39:57.338205 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.66s 2026-03-28 04:39:57.338218 | orchestrator | common : include_tasks -------------------------------------------------- 3.59s 2026-03-28 04:39:57.338230 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.56s 2026-03-28 04:39:57.338242 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.48s 2026-03-28 04:39:57.338255 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 3.24s 2026-03-28 04:39:57.338264 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.19s 2026-03-28 04:39:57.338274 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.17s 2026-03-28 04:39:57.338284 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.01s 2026-03-28 04:39:57.338295 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.92s 2026-03-28 04:39:57.338305 | orchestrator | common : include_tasks -------------------------------------------------- 2.89s 2026-03-28 04:39:57.338314 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.80s 2026-03-28 04:39:57.338326 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.41s 2026-03-28 04:39:57.338337 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.35s 2026-03-28 04:39:57.338347 | orchestrator | 2026-03-28 04:39:57.338359 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-28 04:39:57.338369 | orchestrator | 2026-03-28 04:39:57.338380 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 04:39:57.338392 | orchestrator | Saturday 28 March 2026 04:39:53 +0000 (0:00:01.868) 0:00:01.868 ******** 2026-03-28 04:39:57.338404 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 04:39:57.338416 | orchestrator | 2026-03-28 04:39:57.338440 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-28 04:40:06.361494 | orchestrator | Saturday 28 March 2026 04:39:57 +0000 
(0:00:03.500) 0:00:05.369 ******** 2026-03-28 04:40:06.361591 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361606 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361618 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361629 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361640 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361651 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361663 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361694 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361705 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361716 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 04:40:06.361727 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361738 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361749 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361760 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 04:40:06.361772 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361782 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361793 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-03-28 04:40:06.361804 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361815 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361826 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361849 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 04:40:06.361862 | orchestrator | 2026-03-28 04:40:06.361874 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 04:40:06.361885 | orchestrator | Saturday 28 March 2026 04:40:00 +0000 (0:00:03.491) 0:00:08.860 ******** 2026-03-28 04:40:06.361896 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 04:40:06.361909 | orchestrator | 2026-03-28 04:40:06.361920 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-28 04:40:06.361931 | orchestrator | Saturday 28 March 2026 04:40:03 +0000 (0:00:03.005) 0:00:11.866 ******** 2026-03-28 04:40:06.361944 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.361959 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.361971 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.362005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.362101 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.362117 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.362137 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:06.362150 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:06.362164 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:06.362178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:06.362210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043173 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043283 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043331 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043345 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043357 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043391 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043403 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043432 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:09.043456 | orchestrator | 2026-03-28 04:40:09.043469 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-28 04:40:09.043481 | orchestrator | Saturday 28 March 2026 04:40:08 +0000 (0:00:04.366) 0:00:16.232 ******** 2026-03-28 04:40:09.043500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:09.043515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:09.043527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:09.043540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:09.043559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:09.043580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:11.178487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.178622 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:40:11.178654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.178678 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:40:11.178699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:11.178722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.178742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.178797 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:40:11.178819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:11.178840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:11.178885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.178968 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 04:40:11.178993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.179020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.179042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.179111 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:40:11.179150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.179171 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:40:11.179193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:11.179216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:11.179266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571159 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:40:14.571294 | orchestrator | 2026-03-28 04:40:14.571323 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 04:40:14.571344 | orchestrator | Saturday 28 March 2026 04:40:11 +0000 (0:00:02.964) 0:00:19.197 ******** 2026-03-28 04:40:14.571368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571542 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571626 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:40:14.571646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571666 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:40:14.571687 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:14.571764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571784 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:40:14.571804 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:40:14.571824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:14.571855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981718 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:40:26.981733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:40:26.981745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981765 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:40:26.981775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:26.981785 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:40:26.981795 | orchestrator | 2026-03-28 04:40:26.981806 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-28 04:40:26.981842 | orchestrator | Saturday 28 March 2026 04:40:14 +0000 (0:00:03.409) 0:00:22.606 ******** 2026-03-28 04:40:26.981852 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:40:26.981862 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:40:26.981872 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:40:26.981882 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:40:26.981891 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:40:26.981901 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:40:26.981910 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:40:26.981920 | orchestrator | 2026-03-28 
04:40:26.981930 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 04:40:26.981940 | orchestrator | Saturday 28 March 2026 04:40:16 +0000 (0:00:02.309) 0:00:24.915 ******** 2026-03-28 04:40:26.981949 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:40:26.981959 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:40:26.981968 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:40:26.981978 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:40:26.981988 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:40:26.981997 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:40:26.982093 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:40:26.982108 | orchestrator | 2026-03-28 04:40:26.982119 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 04:40:26.982130 | orchestrator | Saturday 28 March 2026 04:40:19 +0000 (0:00:02.239) 0:00:27.155 ******** 2026-03-28 04:40:26.982141 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:40:26.982152 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:40:26.982162 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:40:26.982171 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:40:26.982181 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:40:26.982190 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:40:26.982200 | orchestrator | skipping: [testbed-node-5] 2026-03-28 04:40:26.982209 | orchestrator | 2026-03-28 04:40:26.982219 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-28 04:40:26.982234 | orchestrator | Saturday 28 March 2026 04:40:21 +0000 (0:00:01.973) 0:00:29.129 ******** 2026-03-28 04:40:26.982244 | orchestrator | ok: [testbed-manager] 2026-03-28 04:40:26.982255 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:40:26.982265 | orchestrator | ok: [testbed-node-1] 
2026-03-28 04:40:26.982274 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:40:26.982284 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:40:26.982294 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:40:26.982303 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:40:26.982313 | orchestrator | 2026-03-28 04:40:26.982322 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 04:40:26.982332 | orchestrator | Saturday 28 March 2026 04:40:23 +0000 (0:00:02.906) 0:00:32.036 ******** 2026-03-28 04:40:26.982342 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:26.982354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:26.982364 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:26.982375 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:26.982385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:26.982410 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042618 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:30.042633 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042644 | 
orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:30.042655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042685 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042698 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042727 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042750 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042760 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042771 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042781 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042806 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042817 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:30.042828 | orchestrator | 2026-03-28 04:40:30.042839 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-28 04:40:30.042850 | orchestrator | Saturday 28 March 2026 04:40:28 +0000 (0:00:05.009) 0:00:37.045 ******** 2026-03-28 04:40:30.042860 | orchestrator | [WARNING]: Skipped 2026-03-28 04:40:30.042876 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-28 04:40:50.306000 | orchestrator | to this access issue: 2026-03-28 04:40:50.306166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-28 04:40:50.306180 | orchestrator | directory 2026-03-28 04:40:50.306188 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:40:50.306197 | orchestrator | 2026-03-28 04:40:50.306204 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-28 04:40:50.306213 | orchestrator | Saturday 28 March 2026 04:40:31 +0000 (0:00:02.505) 0:00:39.550 ******** 2026-03-28 04:40:50.306220 | orchestrator | [WARNING]: Skipped 2026-03-28 04:40:50.306241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-28 04:40:50.306249 | orchestrator | to this access issue: 2026-03-28 04:40:50.306257 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-28 
04:40:50.306264 | orchestrator | directory 2026-03-28 04:40:50.306271 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:40:50.306279 | orchestrator | 2026-03-28 04:40:50.306286 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-28 04:40:50.306292 | orchestrator | Saturday 28 March 2026 04:40:33 +0000 (0:00:01.883) 0:00:41.434 ******** 2026-03-28 04:40:50.306299 | orchestrator | [WARNING]: Skipped 2026-03-28 04:40:50.306306 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-28 04:40:50.306313 | orchestrator | to this access issue: 2026-03-28 04:40:50.306321 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-28 04:40:50.306328 | orchestrator | directory 2026-03-28 04:40:50.306334 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:40:50.306342 | orchestrator | 2026-03-28 04:40:50.306349 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-28 04:40:50.306356 | orchestrator | Saturday 28 March 2026 04:40:35 +0000 (0:00:01.939) 0:00:43.374 ******** 2026-03-28 04:40:50.306363 | orchestrator | [WARNING]: Skipped 2026-03-28 04:40:50.306370 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-28 04:40:50.306376 | orchestrator | to this access issue: 2026-03-28 04:40:50.306383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-28 04:40:50.306408 | orchestrator | directory 2026-03-28 04:40:50.306416 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 04:40:50.306423 | orchestrator | 2026-03-28 04:40:50.306429 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-28 04:40:50.306437 | orchestrator | Saturday 28 March 2026 04:40:37 +0000 (0:00:01.816) 0:00:45.191 ******** 
2026-03-28 04:40:50.306444 | orchestrator | ok: [testbed-manager] 2026-03-28 04:40:50.306452 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:40:50.306459 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:40:50.306466 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:40:50.306473 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:40:50.306480 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:40:50.306487 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:40:50.306494 | orchestrator | 2026-03-28 04:40:50.306502 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-28 04:40:50.306509 | orchestrator | Saturday 28 March 2026 04:40:40 +0000 (0:00:03.770) 0:00:48.961 ******** 2026-03-28 04:40:50.306518 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306527 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306535 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306543 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306551 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306557 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306563 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 04:40:50.306570 | orchestrator | 2026-03-28 04:40:50.306575 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-28 04:40:50.306581 | orchestrator | Saturday 28 March 2026 04:40:44 +0000 (0:00:03.239) 0:00:52.201 ******** 2026-03-28 
04:40:50.306587 | orchestrator | ok: [testbed-manager] 2026-03-28 04:40:50.306593 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:40:50.306599 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:40:50.306605 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:40:50.306611 | orchestrator | ok: [testbed-node-3] 2026-03-28 04:40:50.306617 | orchestrator | ok: [testbed-node-4] 2026-03-28 04:40:50.306623 | orchestrator | ok: [testbed-node-5] 2026-03-28 04:40:50.306630 | orchestrator | 2026-03-28 04:40:50.306638 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 04:40:50.306645 | orchestrator | Saturday 28 March 2026 04:40:46 +0000 (0:00:02.844) 0:00:55.046 ******** 2026-03-28 04:40:50.306657 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:50.306689 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:50.306705 
| orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:50.306713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:50.306723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:50.306752 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:50.306760 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:50.306769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:50.306784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 
04:40:59.453513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:59.453644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:59.453666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:59.453674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:59.453681 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:59.453688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:59.453695 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:40:59.453727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:40:59.453735 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:59.453742 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:59.453749 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:59.453756 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:40:59.453763 | orchestrator | 2026-03-28 04:40:59.453770 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-28 04:40:59.453778 | orchestrator | Saturday 28 March 2026 04:40:50 +0000 (0:00:03.284) 0:00:58.330 ******** 2026-03-28 04:40:59.453784 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453791 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453797 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453803 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453809 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453816 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453823 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 04:40:59.453829 | orchestrator | 
2026-03-28 04:40:59.453835 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-28 04:40:59.453842 | orchestrator | Saturday 28 March 2026 04:40:53 +0000 (0:00:03.257) 0:01:01.587 ******** 2026-03-28 04:40:59.453848 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453854 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453865 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453871 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453877 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453884 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453890 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 04:40:59.453896 | orchestrator | 2026-03-28 04:40:59.453902 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-28 04:40:59.453908 | orchestrator | Saturday 28 March 2026 04:40:56 +0000 (0:00:03.313) 0:01:04.901 ******** 2026-03-28 04:40:59.453924 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449661 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449854 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.449906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 04:41:01.449984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.450003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.450117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.450141 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.450163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.450197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.451076 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:01.451135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 04:41:04.218661 | orchestrator | 2026-03-28 04:41:04.218675 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-28 04:41:04.218687 | orchestrator | Saturday 28 March 2026 04:41:01 +0000 (0:00:04.580) 0:01:09.481 ******** 2026-03-28 04:41:04.218714 | orchestrator | changed: [testbed-manager] => { 2026-03-28 04:41:04.218727 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218738 | orchestrator | } 2026-03-28 04:41:04.218749 | 
orchestrator | changed: [testbed-node-0] => { 2026-03-28 04:41:04.218760 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218771 | orchestrator | } 2026-03-28 04:41:04.218782 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 04:41:04.218793 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218804 | orchestrator | } 2026-03-28 04:41:04.218815 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 04:41:04.218825 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218836 | orchestrator | } 2026-03-28 04:41:04.218847 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 04:41:04.218858 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218869 | orchestrator | } 2026-03-28 04:41:04.218880 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 04:41:04.218900 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218919 | orchestrator | } 2026-03-28 04:41:04.218937 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 04:41:04.218955 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:41:04.218974 | orchestrator | } 2026-03-28 04:41:04.218992 | orchestrator | 2026-03-28 04:41:04.219012 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 04:41:04.219059 | orchestrator | Saturday 28 March 2026 04:41:03 +0000 (0:00:02.166) 0:01:11.648 ******** 2026-03-28 04:41:04.219083 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-03-28 04:41:04.219133 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219163 | orchestrator | skipping: [testbed-manager] 2026-03-28 04:41:04.219206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:41:04.219221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219247 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:41:04.219261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:41:04.219280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:41:04.219316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:42:27.610354 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:42:27.610493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610523 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:42:27.610532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:42:27.610542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610561 | orchestrator | skipping: [testbed-node-3] 2026-03-28 04:42:27.610583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:42:27.610594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610637 | orchestrator | skipping: [testbed-node-4] 2026-03-28 04:42:27.610647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 04:42:27.610656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:42:27.610675 | 
orchestrator | skipping: [testbed-node-5]
2026-03-28 04:42:27.610684 | orchestrator |
2026-03-28 04:42:27.610693 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610702 | orchestrator | Saturday 28 March 2026 04:41:06 +0000 (0:00:03.006) 0:01:14.654 ********
2026-03-28 04:42:27.610711 | orchestrator |
2026-03-28 04:42:27.610720 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610729 | orchestrator | Saturday 28 March 2026 04:41:07 +0000 (0:00:00.433) 0:01:15.088 ********
2026-03-28 04:42:27.610737 | orchestrator |
2026-03-28 04:42:27.610746 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610754 | orchestrator | Saturday 28 March 2026 04:41:07 +0000 (0:00:00.433) 0:01:15.521 ********
2026-03-28 04:42:27.610763 | orchestrator |
2026-03-28 04:42:27.610772 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610781 | orchestrator | Saturday 28 March 2026 04:41:07 +0000 (0:00:00.416) 0:01:15.938 ********
2026-03-28 04:42:27.610789 | orchestrator |
2026-03-28 04:42:27.610798 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610806 | orchestrator | Saturday 28 March 2026 04:41:08 +0000 (0:00:00.481) 0:01:16.419 ********
2026-03-28 04:42:27.610815 | orchestrator |
2026-03-28 04:42:27.610828 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610837 | orchestrator | Saturday 28 March 2026 04:41:09 +0000 (0:00:00.764) 0:01:17.184 ********
2026-03-28 04:42:27.610846 | orchestrator |
2026-03-28 04:42:27.610855 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-28 04:42:27.610870 | orchestrator | Saturday 28 March 2026 04:41:09 +0000 (0:00:00.436) 0:01:17.620 ********
2026-03-28 04:42:27.610878 | orchestrator |
2026-03-28 04:42:27.610889 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-28 04:42:27.610899 | orchestrator | Saturday 28 March 2026 04:41:10 +0000 (0:00:00.843) 0:01:18.463 ********
2026-03-28 04:42:27.610909 | orchestrator | changed: [testbed-manager]
2026-03-28 04:42:27.610919 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:42:27.610929 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:42:27.610939 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:42:27.610948 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:42:27.610958 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:42:27.610968 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:42:27.611003 | orchestrator |
2026-03-28 04:42:27.611014 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-28 04:42:27.611025 | orchestrator | Saturday 28 March 2026 04:41:48 +0000 (0:00:38.233) 0:01:56.697 ********
2026-03-28 04:42:27.611035 | orchestrator | changed: [testbed-manager]
2026-03-28 04:42:27.611045 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:42:27.611055 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:42:27.611065 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:42:27.611076 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:42:27.611086 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:42:27.611096 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:42:27.611106 | orchestrator |
2026-03-28 04:42:27.611122 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-28 04:42:43.236297 | orchestrator | Saturday 28 March 2026 04:42:27 +0000 (0:00:38.940) 0:02:35.638 ********
2026-03-28 04:42:43.236417 | orchestrator | ok: [testbed-manager]
2026-03-28 04:42:43.236436 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:42:43.236449 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:42:43.236460 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:42:43.236471 | orchestrator | ok: [testbed-node-3]
2026-03-28 04:42:43.236482 | orchestrator | ok: [testbed-node-4]
2026-03-28 04:42:43.236492 | orchestrator | ok: [testbed-node-5]
2026-03-28 04:42:43.236503 | orchestrator |
2026-03-28 04:42:43.236515 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-28 04:42:43.236527 | orchestrator | Saturday 28 March 2026 04:42:30 +0000 (0:00:03.005) 0:02:38.643 ********
2026-03-28 04:42:43.236538 | orchestrator | changed: [testbed-manager]
2026-03-28 04:42:43.236570 | orchestrator | changed: [testbed-node-3]
2026-03-28 04:42:43.236581 | orchestrator | changed: [testbed-node-4]
2026-03-28 04:42:43.236592 | orchestrator | changed: [testbed-node-5]
2026-03-28 04:42:43.236602 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:42:43.236613 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:42:43.236624 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:42:43.236635 | orchestrator |
2026-03-28 04:42:43.236646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:42:43.236657 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236669 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236681 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236691 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236702 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236713 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236750 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:42:43.236761 | orchestrator |
2026-03-28 04:42:43.236772 | orchestrator |
2026-03-28 04:42:43.236783 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:42:43.236794 | orchestrator | Saturday 28 March 2026 04:42:42 +0000 (0:00:12.084) 0:02:50.728 ********
2026-03-28 04:42:43.236805 | orchestrator | ===============================================================================
2026-03-28 04:42:43.236816 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.94s
2026-03-28 04:42:43.236827 | orchestrator | common : Restart fluentd container ------------------------------------- 38.23s
2026-03-28 04:42:43.236839 | orchestrator | common : Restart cron container ---------------------------------------- 12.08s
2026-03-28 04:42:43.236852 | orchestrator | common : Copying over config.json files for services -------------------- 5.01s
2026-03-28 04:42:43.236863 | orchestrator | service-check-containers : common | Check containers -------------------- 4.58s
2026-03-28 04:42:43.236876 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.37s
2026-03-28 04:42:43.236889 | orchestrator | common : Flush handlers ------------------------------------------------- 3.81s
2026-03-28 04:42:43.236901 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.77s
2026-03-28 04:42:43.236913 | orchestrator | common : include_tasks -------------------------------------------------- 3.50s
2026-03-28 04:42:43.236940 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.49s
2026-03-28 04:42:43.236952 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.41s
2026-03-28 04:42:43.236983 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.31s
2026-03-28 04:42:43.236996 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.28s
2026-03-28 04:42:43.237008 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.26s
2026-03-28 04:42:43.237020 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.24s
2026-03-28 04:42:43.237033 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.01s
2026-03-28 04:42:43.237045 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.01s
2026-03-28 04:42:43.237058 | orchestrator | common : include_tasks -------------------------------------------------- 3.01s
2026-03-28 04:42:43.237070 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.96s
2026-03-28 04:42:43.237083 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.91s
2026-03-28 04:42:43.560827 | orchestrator | + osism apply -a upgrade loadbalancer
2026-03-28 04:42:45.635742 | orchestrator | 2026-03-28 04:42:45 | INFO  | Task bb64d89f-b14e-4330-b0e5-48911ba0ef6e (loadbalancer) was prepared for execution.
2026-03-28 04:42:45.635860 | orchestrator | 2026-03-28 04:42:45 | INFO  | It takes a moment until task bb64d89f-b14e-4330-b0e5-48911ba0ef6e (loadbalancer) has been started and output is visible here.
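The PLAY RECAP above reports `failed=0` and `unreachable=0` on every host before the job proceeds to `osism apply -a upgrade loadbalancer`. As an illustrative aside (the helper name is not part of the job), a saved console log can be checked for recap failures with a small shell function:

```shell
# Illustrative helper (not from the job): succeed only if the Ansible
# PLAY RECAP lines in a saved log report no failed or unreachable hosts.
recap_ok() {
  ! grep -Eq 'failed=[1-9][0-9]*|unreachable=[1-9][0-9]*' "$1"
}
```

The negated `grep -q` makes the function's exit status usable directly in CI gating, e.g. `recap_ok console.log || exit 1`.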
2026-03-28 04:43:20.088892 | orchestrator |
2026-03-28 04:43:20.089030 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 04:43:20.089043 | orchestrator |
2026-03-28 04:43:20.089051 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 04:43:20.089059 | orchestrator | Saturday 28 March 2026 04:42:51 +0000 (0:00:01.487) 0:00:01.487 ********
2026-03-28 04:43:20.089066 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:20.089074 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:20.089081 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:20.089088 | orchestrator |
2026-03-28 04:43:20.089095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 04:43:20.089102 | orchestrator | Saturday 28 March 2026 04:42:53 +0000 (0:00:01.916) 0:00:03.404 ********
2026-03-28 04:43:20.089127 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-28 04:43:20.089135 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-28 04:43:20.089142 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-28 04:43:20.089148 | orchestrator |
2026-03-28 04:43:20.089155 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-28 04:43:20.089162 | orchestrator |
2026-03-28 04:43:20.089169 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 04:43:20.089176 | orchestrator | Saturday 28 March 2026 04:42:56 +0000 (0:00:02.461) 0:00:05.866 ********
2026-03-28 04:43:20.089183 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:43:20.089190 | orchestrator |
2026-03-28 04:43:20.089197 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-03-28 04:43:20.089204 | orchestrator | Saturday 28 March 2026 04:42:58 +0000 (0:00:02.195) 0:00:08.061 ********
2026-03-28 04:43:20.089211 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:20.089218 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:20.089224 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:20.089231 | orchestrator |
2026-03-28 04:43:20.089238 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-03-28 04:43:20.089244 | orchestrator | Saturday 28 March 2026 04:43:00 +0000 (0:00:02.124) 0:00:10.185 ********
2026-03-28 04:43:20.089253 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:20.089264 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:20.089275 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:20.089286 | orchestrator |
2026-03-28 04:43:20.089297 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-28 04:43:20.089308 | orchestrator | Saturday 28 March 2026 04:43:02 +0000 (0:00:02.160) 0:00:12.346 ********
2026-03-28 04:43:20.089318 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:20.089328 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:20.089338 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:20.089348 | orchestrator |
2026-03-28 04:43:20.089359 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-28 04:43:20.089370 | orchestrator | Saturday 28 March 2026 04:43:04 +0000 (0:00:01.737) 0:00:14.083 ********
2026-03-28 04:43:20.089380 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:43:20.089390 | orchestrator |
2026-03-28 04:43:20.089399 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-28 04:43:20.089408 | orchestrator | Saturday 28 March 2026 04:43:06 +0000 (0:00:01.940) 0:00:16.023 ********
2026-03-28 04:43:20.089419 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:20.089430 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:20.089441 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:20.089454 | orchestrator |
2026-03-28 04:43:20.089464 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-28 04:43:20.089476 | orchestrator | Saturday 28 March 2026 04:43:08 +0000 (0:00:01.803) 0:00:17.827 ********
2026-03-28 04:43:20.089488 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089500 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089512 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089540 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089551 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089561 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 04:43:20.089572 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 04:43:20.089595 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 04:43:20.089607 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 04:43:20.089620 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 04:43:20.089632 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 04:43:20.089660 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
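The sysctl task above sets `ip_nonlocal_bind` for both IPv4 and IPv6 (needed so keepalived VIPs can be bound before they are assigned) and `net.unix.max_dgram_qlen`, while `net.ipv4.tcp_retries2` is left at its default via `KOLLA_UNSET`. For reference only, the equivalent manual persistence would look roughly like this; the target directory is normally `/etc/sysctl.d`, parameterized here only so the sketch can run unprivileged:

```shell
# Sketch of the values applied above. SYSCTL_DIR stands in for
# /etc/sysctl.d so this can be exercised without root.
SYSCTL_DIR="${SYSCTL_DIR:-$(mktemp -d)}"
cat > "$SYSCTL_DIR/90-loadbalancer.conf" <<'EOF'
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128
EOF
# On a real host one would then load it with:
#   sysctl -p "$SYSCTL_DIR/90-loadbalancer.conf"
```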
2026-03-28 04:43:20.089682 | orchestrator |
2026-03-28 04:43:20.089694 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 04:43:20.089705 | orchestrator | Saturday 28 March 2026 04:43:11 +0000 (0:00:03.137) 0:00:20.964 ********
2026-03-28 04:43:20.089716 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-28 04:43:20.089727 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-28 04:43:20.089738 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-28 04:43:20.089877 | orchestrator |
2026-03-28 04:43:20.089897 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 04:43:20.089930 | orchestrator | Saturday 28 March 2026 04:43:13 +0000 (0:00:02.206) 0:00:22.959 ********
2026-03-28 04:43:20.089944 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-28 04:43:20.089976 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-28 04:43:20.089990 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-28 04:43:20.090001 | orchestrator |
2026-03-28 04:43:20.090014 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-28 04:43:20.090083 | orchestrator | Saturday 28 March 2026 04:43:15 +0000 (0:00:02.206) 0:00:25.166 ********
2026-03-28 04:43:20.090090 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-28 04:43:20.090097 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:43:20.090104 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-28 04:43:20.090111 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:43:20.090118 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-28 04:43:20.090124 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:43:20.090131 | orchestrator |
2026-03-28 04:43:20.090138 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-28 04:43:20.090144 | orchestrator | Saturday 28 March 2026 04:43:17 +0000 (0:00:01.833) 0:00:26.999 ******** 2026-03-28 04:43:20.090154 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:20.090166 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:20.090174 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:20.090199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:20.090207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:20.090225 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:31.254311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:31.254417 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:31.254432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:31.254462 | orchestrator | 2026-03-28 04:43:31.254474 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-28 04:43:31.254484 | orchestrator | Saturday 28 March 2026 04:43:20 +0000 (0:00:02.646) 0:00:29.646 ********
2026-03-28 04:43:31.254494 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:31.254503 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:31.254512 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:31.254521 | orchestrator |
2026-03-28 04:43:31.254530 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-28 04:43:31.254539 | orchestrator | Saturday 28 March 2026 04:43:22 +0000 (0:00:01.988) 0:00:31.634 ********
2026-03-28 04:43:31.254548 | orchestrator | ok: [testbed-node-0] => (item=users)
2026-03-28 04:43:31.254557 | orchestrator | ok: [testbed-node-1] => (item=users)
2026-03-28 04:43:31.254566 | orchestrator | ok: [testbed-node-2] => (item=users)
2026-03-28 04:43:31.254575 | orchestrator | ok: [testbed-node-0] => (item=rules)
2026-03-28 04:43:31.254583 | orchestrator | ok: [testbed-node-1] => (item=rules)
2026-03-28 04:43:31.254604 | orchestrator | ok: [testbed-node-2] => (item=rules)
2026-03-28 04:43:31.254614 | orchestrator |
2026-03-28 04:43:31.254623 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-28 04:43:31.254632 | orchestrator | Saturday 28 March 2026 04:43:24 +0000 (0:00:02.285) 0:00:34.396 ********
2026-03-28 04:43:31.254640 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:31.254673 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:31.254683 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:31.254692 | orchestrator |
2026-03-28 04:43:31.254700 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-28 04:43:31.254709 | orchestrator | Saturday 28 March 2026 04:43:27 +0000 (0:00:02.336) 0:00:36.682 ********
2026-03-28 04:43:31.254718 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:43:31.254727 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:43:31.254735 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:43:31.254744 | orchestrator |
2026-03-28 04:43:31.254752 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-28 04:43:31.254761 | orchestrator | Saturday 28 March 2026 04:43:29 +0000 (0:00:02.336) 0:00:39.018 ********
2026-03-28 04:43:31.254771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 04:43:31.254797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 04:43:31.254807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:31.254828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:31.254838 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:43:31.254847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 04:43:31.254861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:43:31.254871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:31.254881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:31.254890 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
04:43:31.254905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 04:43:35.501361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:43:35.501464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:35.501482 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:35.501496 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:43:35.501510 | orchestrator | 2026-03-28 04:43:35.501523 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-28 04:43:35.501536 | orchestrator | Saturday 28 March 2026 04:43:31 +0000 (0:00:01.780) 0:00:40.798 ******** 2026-03-28 04:43:35.501569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:35.501583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:35.501594 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:35.501649 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:35.501663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:35.501680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:35.501693 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:35.501705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:35.501717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:35.501745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:43:49.538497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd', '__omit_place_holder__e350b7573fceedabbc6c9aed02e0738b886723dd'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 04:43:49.538515 | orchestrator | 2026-03-28 04:43:49.538546 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-28 04:43:49.538568 | orchestrator | Saturday 28 March 2026 04:43:35 +0000 (0:00:04.252) 0:00:45.051 ******** 2026-03-28 04:43:49.538581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:43:49.538761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:49.538780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:49.538799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:43:49.538830 | orchestrator | 2026-03-28 04:43:49.538850 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-28 04:43:49.538868 | orchestrator | Saturday 28 March 2026 04:43:40 +0000 (0:00:04.856) 0:00:49.907 ******** 2026-03-28 04:43:49.538887 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 04:43:49.538907 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 04:43:49.538926 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 
04:43:49.538945 | orchestrator |
2026-03-28 04:43:49.538994 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-28 04:43:49.539012 | orchestrator | Saturday 28 March 2026 04:43:43 +0000 (0:00:02.779) 0:00:52.686 ********
2026-03-28 04:43:49.539031 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 04:43:49.539049 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 04:43:49.539069 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-28 04:43:49.539087 | orchestrator |
2026-03-28 04:43:49.539105 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-28 04:43:49.539122 | orchestrator | Saturday 28 March 2026 04:43:47 +0000 (0:00:04.476) 0:00:57.163 ********
2026-03-28 04:43:49.539140 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:43:49.539159 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:43:49.539190 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:44:10.286409 | orchestrator |
2026-03-28 04:44:10.286531 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-28 04:44:10.286550 | orchestrator | Saturday 28 March 2026 04:43:49 +0000 (0:00:01.926) 0:00:59.089 ********
2026-03-28 04:44:10.286563 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 04:44:10.286574 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 04:44:10.286585 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 04:44:10.286596 | orchestrator |
2026-03-28 04:44:10.286607 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-28 04:44:10.286619 | orchestrator | Saturday 28 March 2026 04:43:52 +0000 (0:00:03.071) 0:01:02.160 ********
2026-03-28 04:44:10.286630 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 04:44:10.286642 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 04:44:10.286652 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 04:44:10.286663 | orchestrator |
2026-03-28 04:44:10.286675 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 04:44:10.286686 | orchestrator | Saturday 28 March 2026 04:43:55 +0000 (0:00:02.746) 0:01:04.907 ********
2026-03-28 04:44:10.286714 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:44:10.286726 | orchestrator |
2026-03-28 04:44:10.286737 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-28 04:44:10.286748 | orchestrator | Saturday 28 March 2026 04:43:57 +0000 (0:00:01.933) 0:01:06.841 ********
2026-03-28 04:44:10.286779 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-03-28 04:44:10.286792 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-03-28 04:44:10.286802 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-03-28 04:44:10.286813 | orchestrator |
2026-03-28 04:44:10.286824 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-28 04:44:10.286835 | orchestrator | Saturday 28 March 2026 04:44:00 +0000 (0:00:02.787) 0:01:09.629 ********
2026-03-28 04:44:10.286846 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-28 04:44:10.286857 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-28 04:44:10.286868 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-28 04:44:10.286879 | orchestrator |
2026-03-28 04:44:10.286889 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-03-28 04:44:10.286900 | orchestrator | Saturday 28 March 2026 04:44:02 +0000 (0:00:02.647) 0:01:12.276 ********
2026-03-28 04:44:10.286911 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:44:10.286923 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:44:10.286936 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:44:10.286948 | orchestrator |
2026-03-28 04:44:10.286960 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-03-28 04:44:10.286972 | orchestrator | Saturday 28 March 2026 04:44:04 +0000 (0:00:01.377) 0:01:13.654 ********
2026-03-28 04:44:10.287014 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:44:10.287027 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:44:10.287039 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:44:10.287052 | orchestrator |
2026-03-28 04:44:10.287065 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28 04:44:10.287077 | orchestrator | Saturday 28 March 2026 04:44:06 +0000 (0:00:02.003) 0:01:15.658 ********
2026-03-28 04:44:10.287094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287112 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287144 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287156 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287196 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:44:10.287209 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:10.287221 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:10.287240 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:14.215705 | orchestrator | 2026-03-28 04:44:14.215810 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 04:44:14.215836 | orchestrator | Saturday 28 March 2026 04:44:10 +0000 (0:00:04.182) 0:01:19.841 ******** 2026-03-28 04:44:14.215859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 04:44:14.215939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:14.215964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:14.215986 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:14.216089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 04:44:14.216105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:14.216117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:14.216128 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:14.216167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 04:44:14.216203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:14.216231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:14.216250 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:14.216269 | orchestrator | 2026-03-28 04:44:14.216288 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-03-28 04:44:14.216308 | orchestrator | Saturday 28 March 2026 04:44:11 +0000 (0:00:01.668) 0:01:21.509 ******** 2026-03-28 04:44:14.216328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 04:44:14.216347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:14.216368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:14.216389 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:14.216424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 04:44:25.941049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:25.941210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:25.941231 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:25.941246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 04:44:25.941259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:25.941271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:25.941283 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:25.941295 | orchestrator | 2026-03-28 04:44:25.941307 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-28 04:44:25.941319 | orchestrator | Saturday 28 March 2026 04:44:14 +0000 (0:00:02.261) 0:01:23.771 ******** 2026-03-28 04:44:25.941351 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 04:44:25.941364 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 04:44:25.941375 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 04:44:25.941386 | orchestrator | 2026-03-28 04:44:25.941397 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-28 04:44:25.941408 | orchestrator | Saturday 28 March 2026 04:44:16 +0000 (0:00:02.663) 0:01:26.434 ******** 2026-03-28 04:44:25.941419 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 04:44:25.941429 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 04:44:25.941441 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 04:44:25.941452 | orchestrator | 2026-03-28 04:44:25.941480 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-28 04:44:25.941492 | orchestrator | Saturday 28 March 2026 04:44:19 +0000 (0:00:02.542) 0:01:28.976 ******** 2026-03-28 04:44:25.941504 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 04:44:25.941515 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 04:44:25.941526 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 04:44:25.941537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 04:44:25.941548 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 04:44:25.941559 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:25.941570 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:25.941581 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 04:44:25.941595 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:25.941607 | orchestrator | 2026-03-28 04:44:25.941626 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-28 04:44:25.941639 | orchestrator | Saturday 28 March 2026 04:44:21 +0000 (0:00:02.579) 0:01:31.556 ******** 2026-03-28 04:44:25.941653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:25.941668 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:25.941681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:44:25.941703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-03-28 04:44:25.941727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:44:29.759011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:44:29.759208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:29.759241 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:29.759259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:44:29.759308 | orchestrator | 2026-03-28 04:44:29.759328 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-28 04:44:29.759346 | orchestrator | Saturday 28 March 2026 04:44:25 +0000 (0:00:03.940) 0:01:35.496 ******** 2026-03-28 04:44:29.759363 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 04:44:29.759381 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:44:29.759397 | orchestrator | } 2026-03-28 04:44:29.759413 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 04:44:29.759429 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:44:29.759444 | orchestrator | } 2026-03-28 04:44:29.759459 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 04:44:29.759476 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:44:29.759493 | orchestrator | } 2026-03-28 
04:44:29.759510 | orchestrator | 2026-03-28 04:44:29.759527 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 04:44:29.759543 | orchestrator | Saturday 28 March 2026 04:44:27 +0000 (0:00:01.432) 0:01:36.928 ******** 2026-03-28 04:44:29.759561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 04:44:29.759605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:29.759625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:29.759643 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:29.759672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 04:44:29.759689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:29.759721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:29.759739 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:29.759756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 04:44:29.759772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:44:29.759801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:44:35.686766 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:35.686869 | orchestrator | 2026-03-28 04:44:35.686887 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-28 04:44:35.686900 | orchestrator | Saturday 28 March 2026 04:44:29 +0000 (0:00:02.384) 0:01:39.313 ******** 2026-03-28 04:44:35.686911 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:44:35.686923 | orchestrator | 2026-03-28 04:44:35.686934 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-28 04:44:35.686953 | orchestrator | Saturday 28 March 2026 04:44:31 +0000 (0:00:02.100) 0:01:41.413 ******** 2026-03-28 04:44:35.686969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:44:35.687007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 04:44:35.687022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:35.687034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:35.687062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:44:35.687080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 04:44:35.687100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:35.687112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:35.687153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:44:35.687167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 04:44:35.687185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.462758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.462894 | orchestrator | 2026-03-28 04:44:37.462912 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-28 04:44:37.462925 | orchestrator | Saturday 28 March 2026 04:44:36 +0000 (0:00:04.960) 0:01:46.374 ******** 2026-03-28 04:44:37.462939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:44:37.462955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-03-28 04:44:37.462968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.462979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.462991 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:37.463029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:44:37.463051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 04:44:37.463063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.463074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:37.463086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:44:37.463098 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:37.463109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 04:44:37.463244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:52.764883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 04:44:52.765031 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:52.765067 | orchestrator | 2026-03-28 04:44:52.765090 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-28 04:44:52.765109 | orchestrator | Saturday 28 March 2026 04:44:38 +0000 (0:00:01.734) 0:01:48.108 ******** 2026-03-28 04:44:52.765123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-03-28 04:44:52.765139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:44:52.765152 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:44:52.765163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:44:52.765174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:44:52.765185 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:44:52.765197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:44:52.765208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:44:52.765267 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:44:52.765279 | orchestrator | 2026-03-28 04:44:52.765290 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-28 04:44:52.765301 | orchestrator | Saturday 28 March 2026 04:44:40 +0000 (0:00:02.263) 0:01:50.372 ******** 2026-03-28 04:44:52.765313 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:44:52.765325 | 
orchestrator | ok: [testbed-node-1] 2026-03-28 04:44:52.765336 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:44:52.765347 | orchestrator | 2026-03-28 04:44:52.765360 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-28 04:44:52.765373 | orchestrator | Saturday 28 March 2026 04:44:43 +0000 (0:00:02.389) 0:01:52.761 ******** 2026-03-28 04:44:52.765411 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:44:52.765425 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:44:52.765437 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:44:52.765450 | orchestrator | 2026-03-28 04:44:52.765463 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-28 04:44:52.765474 | orchestrator | Saturday 28 March 2026 04:44:46 +0000 (0:00:02.974) 0:01:55.735 ******** 2026-03-28 04:44:52.765485 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:44:52.765496 | orchestrator | 2026-03-28 04:44:52.765507 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-28 04:44:52.765518 | orchestrator | Saturday 28 March 2026 04:44:48 +0000 (0:00:01.898) 0:01:57.634 ******** 2026-03-28 04:44:52.765570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:44:52.765589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 04:44:52.765603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:44:52.765615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:44:52.765636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 04:44:52.765654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 04:44:52.765676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:44:54.461719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.461834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.461872 | orchestrator |
2026-03-28 04:44:54.461887 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-28 04:44:54.461902 | orchestrator | Saturday 28 March 2026 04:44:52 +0000 (0:00:04.682) 0:02:02.317 ********
2026-03-28 04:44:54.461919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:44:54.461952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.461969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.461983 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:44:54.462076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:44:54.462097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.462124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.462139 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:44:54.462160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:44:54.462175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener
5672'], 'timeout': '30'}}})
2026-03-28 04:44:54.462197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 04:45:11.329360 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:11.329443 | orchestrator |
2026-03-28 04:45:11.329451 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-28 04:45:11.329459 | orchestrator | Saturday 28 March 2026 04:44:54 +0000 (0:00:01.704) 0:02:04.022 ********
2026-03-28 04:45:11.329465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329496 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:11.329501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329511 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:11.329516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:45:11.329526 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:11.329531 | orchestrator |
2026-03-28 04:45:11.329536 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-28 04:45:11.329541 | orchestrator | Saturday 28 March 2026 04:44:56 +0000 (0:00:01.899) 0:02:05.923 ********
2026-03-28 04:45:11.329546 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:45:11.329552 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:45:11.329557 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:45:11.329562 | orchestrator |
2026-03-28 04:45:11.329566 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-28 04:45:11.329571 | orchestrator | Saturday 28 March 2026 04:44:58 +0000 (0:00:02.294) 0:02:08.218 ********
2026-03-28 04:45:11.329576 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:45:11.329581 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:45:11.329586 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:45:11.329591 | orchestrator |
2026-03-28 04:45:11.329596 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-28 04:45:11.329601 | orchestrator | Saturday 28 March 2026 04:45:01 +0000 (0:00:03.010) 0:02:11.229 ********
2026-03-28 04:45:11.329606 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:11.329611 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:11.329617 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:11.329622 | orchestrator |
2026-03-28 04:45:11.329627 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-28 04:45:11.329632 | orchestrator | Saturday 28 March 2026 04:45:03 +0000 (0:00:01.481) 0:02:12.710 ********
2026-03-28 04:45:11.329637 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:45:11.329642 | orchestrator |
2026-03-28 04:45:11.329658 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-28 04:45:11.329663 | orchestrator | Saturday 28 March 2026 04:45:04 +0000 (0:00:01.801) 0:02:14.511 ********
2026-03-28 04:45:11.329669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:11.329698 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:11.329708 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:11.329716 | orchestrator |
2026-03-28 04:45:11.329725 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-28 04:45:11.329734 | orchestrator | Saturday 28 March 2026 04:45:08 +0000 (0:00:03.700) 0:02:18.212 ********
2026-03-28 04:45:11.329747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:11.329756 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:11.329764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:11.329778 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:11.329792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 04:45:23.961771 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:23.961890 | orchestrator |
2026-03-28 04:45:23.961908 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-28 04:45:23.961921 | orchestrator | Saturday 28 March 2026 04:45:11 +0000 (0:00:02.674) 0:02:20.886 ********
2026-03-28 04:45:23.961935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.961950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.961963 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:23.961975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.961987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.961998 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:23.962085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.962100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-28 04:45:23.962133 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:23.962145 | orchestrator |
2026-03-28 04:45:23.962156 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-28 04:45:23.962168 | orchestrator | Saturday 28 March 2026 04:45:14 +0000 (0:00:02.984) 0:02:23.871 ********
2026-03-28 04:45:23.962179 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:23.962190 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:23.962201 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:23.962212 | orchestrator |
2026-03-28 04:45:23.962223 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-28 04:45:23.962233 | orchestrator | Saturday 28 March 2026 04:45:15 +0000 (0:00:01.598) 0:02:25.469 ********
2026-03-28 04:45:23.962244 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:45:23.962255 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:45:23.962267 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:45:23.962277 | orchestrator |
2026-03-28 04:45:23.962289 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-28 04:45:23.962300 | orchestrator | Saturday 28 March 2026 04:45:18 +0000 (0:00:02.384) 0:02:27.853 ********
2026-03-28 04:45:23.962313 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:45:23.962325 | orchestrator |
2026-03-28 04:45:23.962338 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-28 04:45:23.962351 | orchestrator | Saturday 28 March 2026 04:45:20 +0000 (0:00:02.025) 0:02:29.879 ********
2026-03-28 04:45:23.962412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:45:23.962432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:45:23.962453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:45:23.962475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 04:45:23.962488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:45:23.962507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.020898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.020999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:45:26.021074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021132 | orchestrator |
2026-03-28 04:45:26.021146 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-28 04:45:26.021160 | orchestrator | Saturday 28 March 2026 04:45:25 +0000 (0:00:04.805) 0:02:34.685 ********
2026-03-28 04:45:26.021175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:45:26.021201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-28 04:45:26.021235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:45:31.486569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486711 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:45:31.486729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486805 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486816 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:45:31.486830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:45:31.486866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 04:45:31.486921 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:45:31.486933 | orchestrator | 2026-03-28 04:45:31.486945 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-28 04:45:31.486958 | orchestrator | Saturday 28 March 2026 04:45:27 +0000 (0:00:02.032) 0:02:36.717 ******** 2026-03-28 04:45:31.486971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:45:31.486987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:45:31.487001 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:45:31.487012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:45:31.487026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:45:31.487038 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:45:31.487050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}})  2026-03-28 04:45:31.487063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:45:31.487077 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:45:31.487088 | orchestrator | 2026-03-28 04:45:31.487101 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-28 04:45:31.487113 | orchestrator | Saturday 28 March 2026 04:45:29 +0000 (0:00:02.094) 0:02:38.811 ******** 2026-03-28 04:45:31.487126 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:45:31.487139 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:45:31.487151 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:45:31.487163 | orchestrator | 2026-03-28 04:45:31.487182 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-28 04:45:42.785102 | orchestrator | Saturday 28 March 2026 04:45:31 +0000 (0:00:02.234) 0:02:41.046 ******** 2026-03-28 04:45:42.785224 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:45:42.785243 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:45:42.785256 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:45:42.785268 | orchestrator | 2026-03-28 04:45:42.785280 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-28 04:45:42.785292 | orchestrator | Saturday 28 March 2026 04:45:34 +0000 (0:00:02.920) 0:02:43.967 ******** 2026-03-28 04:45:42.785303 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:45:42.785316 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:45:42.785327 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:45:42.785338 | orchestrator | 2026-03-28 04:45:42.785349 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-28 04:45:42.785360 | orchestrator | Saturday 28 March 2026 04:45:36 +0000 (0:00:01.739) 0:02:45.707 ******** 2026-03-28 04:45:42.785371 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:45:42.785383 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:45:42.785395 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:45:42.785406 | orchestrator | 2026-03-28 04:45:42.785417 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-28 04:45:42.785428 | orchestrator | Saturday 28 March 2026 04:45:37 +0000 (0:00:01.328) 0:02:47.035 ******** 2026-03-28 04:45:42.785439 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:45:42.785449 | orchestrator | 2026-03-28 04:45:42.785460 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-28 04:45:42.785471 | orchestrator | Saturday 28 March 2026 04:45:39 +0000 (0:00:01.770) 0:02:48.806 ******** 2026-03-28 04:45:42.785566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:45:42.785590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:42.785604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:45:42.785639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:45:42.785671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:45:42.785685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:45:42.785705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:45:42.785718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:45:42.785733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:42.785762 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822466 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:45:44.822608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:44.822622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:45:44.822695 | orchestrator | 2026-03-28 04:45:44.822708 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-28 04:45:44.822720 | orchestrator | Saturday 28 March 2026 04:45:44 +0000 (0:00:04.862) 0:02:53.668 ******** 2026-03-28 04:45:44.822740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:45:46.016494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:46.016684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.016711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.016731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.016778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.017604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.017646 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:45:46.017686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:45:46.017703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:46.017720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:45:46.017750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 04:45:46.017762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:45:46.017783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-28 04:46:01.337975 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:01.337988 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:01.338000 | orchestrator | 2026-03-28 04:46:01.338012 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-28 04:46:01.338087 | orchestrator | Saturday 28 March 2026 04:45:46 +0000 (0:00:01.913) 0:02:55.582 ******** 2026-03-28 04:46:01.338115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338144 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:01.338155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338178 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:01.338192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:01.338218 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:01.338232 | orchestrator | 2026-03-28 04:46:01.338245 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-28 04:46:01.338258 | orchestrator | Saturday 28 March 2026 04:45:48 +0000 (0:00:02.105) 0:02:57.687 ******** 2026-03-28 04:46:01.338271 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:01.338285 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:01.338297 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:01.338310 | orchestrator | 2026-03-28 04:46:01.338323 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-28 04:46:01.338336 | orchestrator | Saturday 28 March 2026 04:45:50 +0000 (0:00:02.342) 0:03:00.030 ******** 2026-03-28 04:46:01.338349 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:01.338362 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:01.338374 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:01.338386 | orchestrator | 2026-03-28 04:46:01.338399 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-28 04:46:01.338411 | orchestrator | Saturday 28 March 2026 04:45:53 +0000 (0:00:02.845) 0:03:02.875 ******** 2026-03-28 04:46:01.338425 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:01.338438 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:01.338450 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:01.338463 | orchestrator | 2026-03-28 04:46:01.338479 | orchestrator | TASK [include_role : 
glance] *************************************************** 2026-03-28 04:46:01.338500 | orchestrator | Saturday 28 March 2026 04:45:54 +0000 (0:00:01.377) 0:03:04.253 ******** 2026-03-28 04:46:01.338523 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:46:01.338538 | orchestrator | 2026-03-28 04:46:01.338557 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-28 04:46:01.338569 | orchestrator | Saturday 28 March 2026 04:45:56 +0000 (0:00:01.928) 0:03:06.181 ******** 2026-03-28 04:46:01.338631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 04:46:02.439840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:02.439937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 04:46:02.440003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:02.440016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 04:46:02.440044 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:06.032695 | 
orchestrator | 2026-03-28 04:46:06.032799 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-28 04:46:06.032817 | orchestrator | Saturday 28 March 2026 04:46:02 +0000 (0:00:05.825) 0:03:12.006 ******** 2026-03-28 04:46:06.032835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 04:46:06.032894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:06.032910 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:06.032944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 04:46:06.032975 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:06.032989 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 04:46:06.033011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 04:46:24.492450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 04:46:24.492569 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:24.492587 | orchestrator | 2026-03-28 04:46:24.492601 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-03-28 04:46:24.492614 | orchestrator | Saturday 28 March 2026 04:46:07 +0000 (0:00:04.686) 0:03:16.693 ******** 2026-03-28 04:46:24.492627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492655 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:24.492667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492782 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:24.492795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 04:46:24.492824 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:24.492836 | orchestrator | 2026-03-28 04:46:24.492847 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-28 04:46:24.492858 | orchestrator 
| Saturday 28 March 2026 04:46:11 +0000 (0:00:04.549) 0:03:21.242 ******** 2026-03-28 04:46:24.492869 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:24.492881 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:24.492892 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:24.492903 | orchestrator | 2026-03-28 04:46:24.492913 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-28 04:46:24.492924 | orchestrator | Saturday 28 March 2026 04:46:13 +0000 (0:00:02.169) 0:03:23.412 ******** 2026-03-28 04:46:24.492935 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:24.492946 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:24.492957 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:24.492968 | orchestrator | 2026-03-28 04:46:24.492979 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-28 04:46:24.492990 | orchestrator | Saturday 28 March 2026 04:46:16 +0000 (0:00:02.820) 0:03:26.233 ******** 2026-03-28 04:46:24.493001 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:24.493012 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:24.493023 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:24.493034 | orchestrator | 2026-03-28 04:46:24.493045 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-28 04:46:24.493055 | orchestrator | Saturday 28 March 2026 04:46:18 +0000 (0:00:01.572) 0:03:27.806 ******** 2026-03-28 04:46:24.493066 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:46:24.493077 | orchestrator | 2026-03-28 04:46:24.493088 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-28 04:46:24.493098 | orchestrator | Saturday 28 March 2026 04:46:19 +0000 (0:00:01.655) 0:03:29.461 ******** 2026-03-28 04:46:24.493110 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:46:24.493217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:46:41.721753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:46:41.721932 | orchestrator | 2026-03-28 04:46:41.721954 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-28 04:46:41.721968 | orchestrator | Saturday 28 March 2026 04:46:24 +0000 (0:00:04.585) 0:03:34.047 ******** 2026-03-28 04:46:41.722000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:46:41.722013 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:41.722086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:46:41.722119 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:41.722131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:46:41.722143 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:41.722154 | orchestrator | 2026-03-28 04:46:41.722165 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-28 04:46:41.722177 | orchestrator | Saturday 28 March 2026 04:46:26 +0000 (0:00:01.864) 0:03:35.911 ******** 2026-03-28 04:46:41.722190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722218 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:41.722255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722281 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:41.722295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:46:41.722322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:41.722334 | orchestrator | 2026-03-28 04:46:41.722347 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-28 04:46:41.722360 | orchestrator | Saturday 28 March 2026 04:46:27 +0000 (0:00:01.603) 0:03:37.515 ******** 2026-03-28 04:46:41.722373 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:41.722387 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:41.722399 | orchestrator | ok: [testbed-node-2] 2026-03-28 
04:46:41.722411 | orchestrator | 2026-03-28 04:46:41.722431 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-28 04:46:41.722444 | orchestrator | Saturday 28 March 2026 04:46:30 +0000 (0:00:02.411) 0:03:39.926 ******** 2026-03-28 04:46:41.722457 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:41.722469 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:41.722481 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:41.722502 | orchestrator | 2026-03-28 04:46:41.722515 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-28 04:46:41.722527 | orchestrator | Saturday 28 March 2026 04:46:33 +0000 (0:00:03.111) 0:03:43.038 ******** 2026-03-28 04:46:41.722540 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:41.722553 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:41.722565 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:41.722578 | orchestrator | 2026-03-28 04:46:41.722591 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-28 04:46:41.722603 | orchestrator | Saturday 28 March 2026 04:46:34 +0000 (0:00:01.419) 0:03:44.458 ******** 2026-03-28 04:46:41.722615 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:46:41.722628 | orchestrator | 2026-03-28 04:46:41.722640 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-28 04:46:41.722651 | orchestrator | Saturday 28 March 2026 04:46:36 +0000 (0:00:02.007) 0:03:46.465 ******** 2026-03-28 04:46:41.722675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 04:46:43.441568 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 04:46:43.441720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 04:46:43.441740 | orchestrator | 2026-03-28 04:46:43.441754 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-28 04:46:43.441766 | orchestrator | Saturday 28 March 2026 04:46:41 +0000 (0:00:04.816) 0:03:51.282 ******** 2026-03-28 04:46:43.441782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 04:46:43.442116 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:43.442207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 04:46:52.398276 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:52.398442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 04:46:52.398478 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:52.398511 | orchestrator | 2026-03-28 04:46:52.398534 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-03-28 04:46:52.398555 | orchestrator | Saturday 28 March 2026 04:46:43 +0000 (0:00:01.721) 0:03:53.004 ******** 2026-03-28 04:46:52.398578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.398621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.398692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 04:46:52.398729 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:52.398772 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.398816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.398856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 04:46:52.398876 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:52.398922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.398961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 04:46:52.398981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 04:46:52.399001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 04:46:52.399034 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:52.399053 | orchestrator | 2026-03-28 04:46:52.399074 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-28 04:46:52.399094 | orchestrator | Saturday 28 March 2026 04:46:45 +0000 (0:00:02.123) 0:03:55.127 ******** 2026-03-28 04:46:52.399113 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:52.399133 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:52.399150 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:52.399168 | 
orchestrator | 2026-03-28 04:46:52.399186 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-28 04:46:52.399205 | orchestrator | Saturday 28 March 2026 04:46:47 +0000 (0:00:02.266) 0:03:57.394 ******** 2026-03-28 04:46:52.399223 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:46:52.399241 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:46:52.399258 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:46:52.399276 | orchestrator | 2026-03-28 04:46:52.399295 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-28 04:46:52.399313 | orchestrator | Saturday 28 March 2026 04:46:50 +0000 (0:00:02.885) 0:04:00.280 ******** 2026-03-28 04:46:52.399331 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:46:52.399350 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:46:52.399370 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:46:52.399388 | orchestrator | 2026-03-28 04:46:52.399406 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-28 04:46:52.399437 | orchestrator | Saturday 28 March 2026 04:46:52 +0000 (0:00:01.448) 0:04:01.728 ******** 2026-03-28 04:46:52.399472 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:02.756459 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:02.756594 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:47:02.756613 | orchestrator | 2026-03-28 04:47:02.756627 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-28 04:47:02.756641 | orchestrator | Saturday 28 March 2026 04:46:53 +0000 (0:00:01.486) 0:04:03.215 ******** 2026-03-28 04:47:02.756652 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:47:02.756664 | orchestrator | 2026-03-28 04:47:02.756687 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-03-28 04:47:02.756700 | orchestrator | Saturday 28 March 2026 04:46:55 +0000 (0:00:02.144) 0:04:05.359 ******** 2026-03-28 04:47:02.756718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 04:47:02.756736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:02.756773 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 04:47:02.756788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:02.756833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:02.756847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:02.756859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 04:47:02.756879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:02.756892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:02.756903 | orchestrator | 2026-03-28 04:47:02.756915 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-28 04:47:02.756928 | orchestrator | Saturday 28 March 2026 04:47:00 +0000 (0:00:04.933) 0:04:10.293 ******** 2026-03-28 04:47:02.756980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 04:47:04.510142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:04.510248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 04:47:04.510292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:04.510307 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:04.510321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:04.510348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:04.510360 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:04.510392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 04:47:04.510405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 04:47:04.510425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 04:47:04.510437 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:47:04.510448 | orchestrator | 2026-03-28 04:47:04.510461 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-28 04:47:04.510473 | orchestrator | Saturday 28 March 2026 04:47:02 +0000 (0:00:02.016) 0:04:12.310 ******** 2026-03-28 04:47:04.510486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510513 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:04.510524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510552 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:04.510566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-28 04:47:04.510594 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:47:04.510607 | orchestrator | 2026-03-28 04:47:04.510621 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-28 04:47:04.510642 | orchestrator | Saturday 28 March 2026 04:47:04 +0000 (0:00:01.758) 0:04:14.068 ******** 2026-03-28 04:47:20.073437 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:47:20.073561 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:47:20.073576 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:47:20.073612 | orchestrator | 2026-03-28 04:47:20.073643 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-28 04:47:20.073656 | orchestrator | Saturday 28 March 2026 04:47:06 +0000 (0:00:02.209) 0:04:16.278 ******** 2026-03-28 04:47:20.073667 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:47:20.073678 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:47:20.073689 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:47:20.073700 | orchestrator | 2026-03-28 04:47:20.073711 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-28 04:47:20.073722 | orchestrator | Saturday 28 March 2026 04:47:10 +0000 (0:00:03.311) 0:04:19.589 ******** 2026-03-28 04:47:20.073734 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:20.073746 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:20.073757 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:47:20.073768 | orchestrator | 2026-03-28 04:47:20.073779 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-28 04:47:20.073790 | orchestrator | Saturday 28 March 2026 04:47:11 +0000 (0:00:01.403) 0:04:20.993 ******** 2026-03-28 04:47:20.073801 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:47:20.073812 | orchestrator | 2026-03-28 04:47:20.073823 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-28 04:47:20.073834 | orchestrator | 
Saturday 28 March 2026 04:47:13 +0000 (0:00:01.906) 0:04:22.899 ********
2026-03-28 04:47:20.073850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:20.073867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:20.073909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:20.073949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:20.073963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:20.073978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:20.073991 | orchestrator |
2026-03-28 04:47:20.074005 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-28 04:47:20.074089 | orchestrator | Saturday 28 March 2026 04:47:18 +0000 (0:00:05.014) 0:04:27.913 ********
2026-03-28 04:47:20.074110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:20.074141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:33.441774 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:47:33.441876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:33.441890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:33.441898 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:47:33.441906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:33.441927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 04:47:33.441950 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:47:33.441958 | orchestrator |
2026-03-28 04:47:33.441965 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-28 04:47:33.441973 | orchestrator | Saturday 28 March 2026 04:47:20 +0000 (0:00:01.721) 0:04:29.634 ********
2026-03-28 04:47:33.441991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442008 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:47:33.442060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442104 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:47:33.442112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:33.442125 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:47:33.442131 | orchestrator |
2026-03-28 04:47:33.442138 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-28 04:47:33.442145 | orchestrator | Saturday 28 March 2026 04:47:22 +0000 (0:00:01.999) 0:04:31.634 ********
2026-03-28 04:47:33.442151 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:47:33.442159 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:47:33.442166 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:47:33.442172 | orchestrator |
2026-03-28 04:47:33.442178 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-28 04:47:33.442185 | orchestrator | Saturday 28 March 2026 04:47:24 +0000 (0:00:02.284) 0:04:33.919 ********
2026-03-28 04:47:33.442191 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:47:33.442197 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:47:33.442204 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:47:33.442210 | orchestrator |
2026-03-28 04:47:33.442216 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-28 04:47:33.442223 | orchestrator | Saturday 28 March 2026 04:47:27 +0000 (0:00:03.014) 0:04:36.934 ********
2026-03-28 04:47:33.442229 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:47:33.442242 | orchestrator |
2026-03-28 04:47:33.442249 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-28 04:47:33.442255 | orchestrator | Saturday 28 March 2026 04:47:29 +0000 (0:00:02.166) 0:04:39.100 ********
2026-03-28 04:47:33.442267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:33.442275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:33.442291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.333812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.333902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:35.333937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.333961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.333970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.333994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:35.334002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.334010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.334075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:35.334099 | orchestrator |
2026-03-28 04:47:35.334109 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-28 04:47:35.334117 | orchestrator | Saturday 28 March 2026 04:47:34 +0000 (0:00:05.047) 0:04:44.148 ********
2026-03-28 04:47:35.334131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:35.334146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.690720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.690864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.690929 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:47:38.690954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:38.690992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691074 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:47:38.691086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:47:38.691161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 04:47:38.691205 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:47:38.691216 | orchestrator |
2026-03-28 04:47:38.691228 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-28 04:47:38.691241 | orchestrator | Saturday 28 March 2026 04:47:36 +0000 (0:00:01.931) 0:04:46.079 ********
2026-03-28 04:47:38.691253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:38.691268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:38.691280 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:47:38.691291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:38.691312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:54.055765 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:47:54.055874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:54.055912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-03-28 04:47:54.055927 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:47:54.055938 | orchestrator |
2026-03-28 04:47:54.055949 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-28 04:47:54.055960 | orchestrator | Saturday 28 March 2026 04:47:38 +0000 (0:00:02.169) 0:04:48.249 ********
2026-03-28 04:47:54.055970 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:47:54.055980 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:47:54.055990 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:47:54.056000 | orchestrator |
2026-03-28 04:47:54.056010 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-28 04:47:54.056020 | orchestrator | Saturday 28 March 2026 04:47:40 +0000 (0:00:02.260) 0:04:50.509 ********
2026-03-28 04:47:54.056030 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:47:54.056039 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:47:54.056049 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:47:54.056058 | orchestrator |
2026-03-28 04:47:54.056068 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-28 04:47:54.056078 | orchestrator | Saturday 28 March 2026 04:47:43 +0000 (0:00:02.531) 0:04:53.469 ********
2026-03-28 04:47:54.056088 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:47:54.056097 | orchestrator |
2026-03-28 04:47:54.056107 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-28 04:47:54.056116 | orchestrator | Saturday 28 March 2026 04:47:46 +0000 (0:00:02.531) 0:04:56.000 ********
2026-03-28 04:47:54.056126 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 04:47:54.056136 | orchestrator |
2026-03-28 04:47:54.056145 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-28 04:47:54.056155 | orchestrator | Saturday 28 March 2026 04:47:50 +0000 (0:00:04.060) 0:05:00.061 ********
2026-03-28 04:47:54.056237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:47:54.056279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:47:54.056292 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:54.056303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:47:54.056318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:47:54.056329 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:54.056348 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:47:57.948941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:47:57.949011 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:47:57.949018 | orchestrator | 2026-03-28 04:47:57.949023 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-28 04:47:57.949028 | orchestrator | Saturday 28 March 2026 04:47:54 +0000 (0:00:03.543) 0:05:03.605 ******** 2026-03-28 04:47:57.949046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:47:57.949066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:47:57.949071 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:47:57.949086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:47:57.949091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:47:57.949096 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:47:57.949103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-03-28 04:47:57.949114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 04:48:14.627460 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:14.627582 | orchestrator | 2026-03-28 04:48:14.627598 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-28 04:48:14.627611 | orchestrator | Saturday 28 March 2026 04:47:57 +0000 (0:00:03.902) 0:05:07.507 ******** 2026-03-28 04:48:14.627625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627654 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:14.627683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627729 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:14.627742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 04:48:14.627764 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:14.627776 | orchestrator | 2026-03-28 04:48:14.627787 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-28 04:48:14.627799 | orchestrator | Saturday 28 March 2026 04:48:02 +0000 (0:00:04.353) 0:05:11.861 ******** 2026-03-28 04:48:14.627810 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:48:14.627839 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:48:14.627850 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:48:14.627861 | orchestrator | 2026-03-28 04:48:14.627872 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-28 04:48:14.627883 | orchestrator | Saturday 28 March 2026 04:48:05 +0000 (0:00:03.003) 0:05:14.865 ******** 2026-03-28 04:48:14.627894 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:14.627905 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:14.627916 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
04:48:14.627926 | orchestrator | 2026-03-28 04:48:14.627937 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-28 04:48:14.627948 | orchestrator | Saturday 28 March 2026 04:48:07 +0000 (0:00:02.660) 0:05:17.525 ******** 2026-03-28 04:48:14.627959 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:14.627970 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:14.627982 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:14.627995 | orchestrator | 2026-03-28 04:48:14.628007 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-28 04:48:14.628019 | orchestrator | Saturday 28 March 2026 04:48:09 +0000 (0:00:01.461) 0:05:18.987 ******** 2026-03-28 04:48:14.628032 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:48:14.628045 | orchestrator | 2026-03-28 04:48:14.628058 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-28 04:48:14.628070 | orchestrator | Saturday 28 March 2026 04:48:11 +0000 (0:00:02.220) 0:05:21.208 ******** 2026-03-28 04:48:14.628084 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 
04:48:14.628113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 04:48:14.628127 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 04:48:14.628140 | orchestrator | 2026-03-28 04:48:14.628153 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-28 04:48:14.628167 | orchestrator | Saturday 28 March 2026 04:48:14 +0000 (0:00:02.430) 0:05:23.639 ******** 2026-03-28 04:48:14.628187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 04:48:29.183644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 04:48:29.183779 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:29.183799 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:29.183835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 04:48:29.183864 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:29.183876 | orchestrator | 2026-03-28 04:48:29.183889 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-28 04:48:29.183901 | orchestrator | Saturday 28 March 2026 04:48:15 +0000 (0:00:01.830) 0:05:25.469 ******** 2026-03-28 04:48:29.183913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 04:48:29.183927 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:29.183938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 04:48:29.183949 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:29.183961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 04:48:29.183972 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 04:48:29.183983 | orchestrator | 2026-03-28 04:48:29.183994 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-28 04:48:29.184005 | orchestrator | Saturday 28 March 2026 04:48:17 +0000 (0:00:01.435) 0:05:26.905 ******** 2026-03-28 04:48:29.184015 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:29.184026 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:29.184037 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:29.184048 | orchestrator | 2026-03-28 04:48:29.184059 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-28 04:48:29.184070 | orchestrator | Saturday 28 March 2026 04:48:18 +0000 (0:00:01.453) 0:05:28.358 ******** 2026-03-28 04:48:29.184081 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:29.184092 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:29.184102 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:29.184113 | orchestrator | 2026-03-28 04:48:29.184124 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-28 04:48:29.184135 | orchestrator | Saturday 28 March 2026 04:48:21 +0000 (0:00:02.277) 0:05:30.636 ******** 2026-03-28 04:48:29.184146 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:29.184157 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:29.184168 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:29.184181 | orchestrator | 2026-03-28 04:48:29.184193 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-28 04:48:29.184206 | orchestrator | Saturday 28 March 2026 04:48:22 +0000 (0:00:01.682) 0:05:32.319 ******** 2026-03-28 04:48:29.184218 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:48:29.184231 | orchestrator | 2026-03-28 04:48:29.184243 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-28 04:48:29.184264 | orchestrator | Saturday 28 March 2026 04:48:24 +0000 (0:00:02.019) 0:05:34.338 ******** 2026-03-28 04:48:29.184339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:48:29.184366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.184383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:29.184400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:29.184431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.438300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.438464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-03-28 04:48:29.438508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:29.438522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:29.438536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.438571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:29.438604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.438617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.438637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:29.438653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 
04:48:29.438666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:48:29.438695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.550108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:29.550246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-28 04:48:29.550266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.550304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:29.550414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:29.550437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent 
' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:29.550452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.550474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.550488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.550509 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.674393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.674514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:29.674533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.674546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:29.674581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:29.674593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.674623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:29.674641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-03-28 04:48:29.674654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:29.674665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.674684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:29.674695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:29.674717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:31.946682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:31.946804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:31.946821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:31.946857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:31.946872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:31.946884 | orchestrator | 2026-03-28 04:48:31.946897 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-28 04:48:31.946910 | orchestrator | Saturday 28 March 2026 04:48:30 +0000 (0:00:06.047) 0:05:40.385 ******** 2026-03-28 04:48:31.946948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:48:31.946963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:31.946984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:31.946997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:31.947018 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.034131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.034237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.034278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:32.034293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:32.034308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:48:32.034392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.034408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.034430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:32.034443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:32.034456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.034476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:32.034554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.119787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.119886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:32.119904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:48:32.119919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.119951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.120001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:32.120015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.120028 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:32.120042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 04:48:32.120055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:32.120073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 04:48:32.120100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:32.341543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.341619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.341629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:32.341638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.341646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.341687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:32.341708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.341715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 04:48:32.341723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:32.341732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:32.341743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:32.341755 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:32.341763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:32.341776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 04:48:49.065158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 04:48:49.065240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 04:48:49.065254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 04:48:49.065299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 04:48:49.065317 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:49.065332 | orchestrator | 2026-03-28 04:48:49.065347 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-28 04:48:49.065362 | orchestrator | Saturday 28 March 2026 04:48:33 +0000 (0:00:02.505) 0:05:42.891 ******** 2026-03-28 04:48:49.065378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065438 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:48:49.065447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065477 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:48:49.065485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:48:49.065502 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:48:49.065510 | orchestrator | 2026-03-28 04:48:49.065519 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-28 04:48:49.065527 | orchestrator | Saturday 28 March 2026 04:48:36 +0000 
(0:00:03.063) 0:05:45.954 ******** 2026-03-28 04:48:49.065535 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:48:49.065543 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:48:49.065551 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:48:49.065559 | orchestrator | 2026-03-28 04:48:49.065567 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-28 04:48:49.065575 | orchestrator | Saturday 28 March 2026 04:48:38 +0000 (0:00:02.408) 0:05:48.363 ******** 2026-03-28 04:48:49.065583 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:48:49.065591 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:48:49.065599 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:48:49.065607 | orchestrator | 2026-03-28 04:48:49.065615 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-28 04:48:49.065623 | orchestrator | Saturday 28 March 2026 04:48:41 +0000 (0:00:02.973) 0:05:51.336 ******** 2026-03-28 04:48:49.065642 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:48:49.065651 | orchestrator | 2026-03-28 04:48:49.065659 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-28 04:48:49.065667 | orchestrator | Saturday 28 March 2026 04:48:44 +0000 (0:00:02.330) 0:05:53.666 ******** 2026-03-28 04:48:49.065676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:48:49.065690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:48:49.065707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:49:07.180267 | orchestrator | 2026-03-28 04:49:07.180378 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-28 04:49:07.180395 | orchestrator | Saturday 28 March 2026 04:48:49 +0000 (0:00:04.958) 0:05:58.625 ******** 2026-03-28 04:49:07.180411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:49:07.180449 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:07.180527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:49:07.180542 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:07.180569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:49:07.180600 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:07.180620 | orchestrator | 2026-03-28 04:49:07.180640 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-28 04:49:07.180658 | orchestrator | Saturday 28 March 2026 04:48:50 +0000 (0:00:01.558) 0:06:00.183 ******** 2026-03-28 04:49:07.180680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180756 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:07.180769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180792 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:07.180806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:49:07.180833 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:07.180845 | orchestrator | 2026-03-28 04:49:07.180859 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-28 04:49:07.180873 | orchestrator | Saturday 28 March 2026 04:48:52 +0000 (0:00:01.728) 0:06:01.912 ******** 2026-03-28 04:49:07.180886 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:07.180900 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:07.180914 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:07.180926 | orchestrator | 2026-03-28 04:49:07.180939 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-28 04:49:07.180952 | orchestrator | Saturday 28 March 2026 04:48:54 +0000 (0:00:02.308) 0:06:04.220 ******** 2026-03-28 04:49:07.180965 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:07.180982 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:07.181002 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:07.181021 | orchestrator | 2026-03-28 04:49:07.181040 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-28 
04:49:07.181059 | orchestrator | Saturday 28 March 2026 04:48:57 +0000 (0:00:03.087) 0:06:07.308 ******** 2026-03-28 04:49:07.181087 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:49:07.181108 | orchestrator | 2026-03-28 04:49:07.181129 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-28 04:49:07.181150 | orchestrator | Saturday 28 March 2026 04:49:00 +0000 (0:00:02.852) 0:06:10.161 ******** 2026-03-28 04:49:07.181171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:07.181196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:08.291510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:08.291633 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:08.291665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:08.291734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:49:08.291749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.291822 | orchestrator | 2026-03-28 04:49:08.291835 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-28 04:49:08.291855 | orchestrator | Saturday 28 March 2026 04:49:08 +0000 (0:00:07.689) 0:06:17.851 ******** 2026-03-28 04:49:08.999809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:08.999938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:08.999959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:49:08.999993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:09.000008 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:09.000042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:09.000056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:09.000075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-03-28 04:49:09.000087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:09.000106 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:09.000119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:09.000142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:49:30.861841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 04:49:30.861976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 04:49:30.861998 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:30.862013 | orchestrator | 2026-03-28 04:49:30.862116 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-28 04:49:30.862130 | orchestrator | Saturday 28 March 2026 04:49:10 +0000 (0:00:01.823) 0:06:19.674 ******** 2026-03-28 04:49:30.862142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862195 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
04:49:30.862206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862252 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:30.862263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:49:30.862328 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:30.862339 | orchestrator | 2026-03-28 04:49:30.862351 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-28 04:49:30.862364 | orchestrator | Saturday 28 March 2026 04:49:12 +0000 (0:00:02.626) 0:06:22.301 ******** 2026-03-28 04:49:30.862389 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:30.862408 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:30.862422 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:30.862434 | orchestrator | 2026-03-28 04:49:30.862446 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-28 04:49:30.862459 | orchestrator | Saturday 28 March 2026 04:49:15 +0000 (0:00:02.303) 0:06:24.604 ******** 2026-03-28 04:49:30.862472 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:30.862484 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:30.862496 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:30.862508 | orchestrator | 2026-03-28 04:49:30.862521 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-28 04:49:30.862534 | orchestrator | Saturday 28 March 2026 04:49:17 +0000 (0:00:02.858) 0:06:27.462 ******** 2026-03-28 04:49:30.862572 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:49:30.862585 | orchestrator | 2026-03-28 04:49:30.862597 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-03-28 04:49:30.862609 | orchestrator | Saturday 28 March 2026 04:49:20 +0000 (0:00:02.902) 0:06:30.365 ******** 2026-03-28 04:49:30.862623 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-28 04:49:30.862636 | orchestrator | 2026-03-28 04:49:30.862649 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-28 04:49:30.862661 | orchestrator | Saturday 28 March 2026 04:49:22 +0000 (0:00:01.667) 0:06:32.033 ******** 2026-03-28 04:49:30.862676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 04:49:30.862691 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 04:49:30.862705 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-28 04:49:30.862718 | orchestrator | 2026-03-28 04:49:30.862729 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-28 04:49:30.862742 | orchestrator | Saturday 28 March 2026 04:49:28 +0000 (0:00:05.795) 0:06:37.829 ******** 2026-03-28 04:49:30.862753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:30.862781 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:54.893605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.893869 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:54.893911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.893927 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:54.893939 | orchestrator | 2026-03-28 04:49:54.893952 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-28 04:49:54.893965 | orchestrator | Saturday 28 March 2026 04:49:30 +0000 (0:00:02.590) 0:06:40.419 ******** 2026-03-28 04:49:54.893977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.893992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.894005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:54.894081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.894097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.894108 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:54.894120 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.894142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-28 04:49:54.894155 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:54.894168 | orchestrator | 2026-03-28 04:49:54.894182 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 04:49:54.894195 | orchestrator | Saturday 28 March 2026 04:49:33 +0000 (0:00:02.587) 0:06:43.006 ******** 2026-03-28 04:49:54.894207 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:54.894222 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:54.894234 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:54.894246 | orchestrator | 2026-03-28 04:49:54.894259 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 04:49:54.894272 | orchestrator | Saturday 28 March 2026 04:49:37 +0000 (0:00:04.003) 0:06:47.010 ******** 2026-03-28 04:49:54.894307 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:54.894321 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:54.894333 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:54.894346 | orchestrator | 2026-03-28 04:49:54.894358 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-28 04:49:54.894371 | orchestrator | Saturday 28 March 2026 04:49:41 +0000 (0:00:04.027) 0:06:51.038 ******** 2026-03-28 04:49:54.894384 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-28 04:49:54.894397 | orchestrator | 2026-03-28 04:49:54.894410 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-28 04:49:54.894423 | orchestrator | Saturday 28 March 2026 04:49:43 +0000 (0:00:01.696) 0:06:52.734 ******** 2026-03-28 04:49:54.894457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894472 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:54.894492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894504 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:54.894515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894527 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:54.894538 | orchestrator | 2026-03-28 04:49:54.894549 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-28 04:49:54.894560 | orchestrator | Saturday 28 March 2026 04:49:45 +0000 (0:00:02.610) 0:06:55.345 ******** 2026-03-28 04:49:54.894572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894583 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:54.894595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894614 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:54.894646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-28 04:49:54.894658 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:54.894670 | orchestrator | 2026-03-28 04:49:54.894681 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-28 04:49:54.894692 | orchestrator | Saturday 28 March 2026 04:49:48 +0000 (0:00:02.532) 0:06:57.878 ******** 2026-03-28 04:49:54.894703 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:49:54.894714 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:49:54.894725 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:49:54.894735 | orchestrator | 2026-03-28 04:49:54.894746 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 04:49:54.894757 | orchestrator | Saturday 28 March 2026 04:49:50 +0000 (0:00:02.350) 0:07:00.229 ******** 2026-03-28 04:49:54.894768 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:49:54.894779 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:49:54.894790 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:49:54.894801 | orchestrator | 2026-03-28 04:49:54.894812 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 04:49:54.894823 | orchestrator | Saturday 28 March 2026 04:49:54 +0000 (0:00:04.218) 0:07:04.448 ******** 2026-03-28 04:50:23.801084 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:50:23.801206 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:50:23.801221 | orchestrator | ok: [testbed-node-2] 2026-03-28 
04:50:23.801234 | orchestrator | 2026-03-28 04:50:23.801246 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-28 04:50:23.801258 | orchestrator | Saturday 28 March 2026 04:49:58 +0000 (0:00:04.040) 0:07:08.489 ******** 2026-03-28 04:50:23.801270 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-28 04:50:23.801282 | orchestrator | 2026-03-28 04:50:23.801293 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-28 04:50:23.801305 | orchestrator | Saturday 28 March 2026 04:50:01 +0000 (0:00:02.475) 0:07:10.964 ******** 2026-03-28 04:50:23.801336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801352 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:23.801365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801377 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:23.801410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801422 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:23.801433 | orchestrator | 2026-03-28 04:50:23.801444 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-28 04:50:23.801456 | orchestrator | Saturday 28 March 2026 04:50:04 +0000 (0:00:02.675) 0:07:13.640 ******** 2026-03-28 04:50:23.801467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801478 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:23.801490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801501 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:23.801530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 04:50:23.801542 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:23.801553 | orchestrator | 2026-03-28 04:50:23.801564 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-28 04:50:23.801575 | orchestrator | Saturday 28 March 2026 04:50:06 +0000 (0:00:02.905) 0:07:16.545 ******** 2026-03-28 04:50:23.801586 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:23.801597 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:23.801608 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:23.801622 | orchestrator | 2026-03-28 04:50:23.801635 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 04:50:23.801676 | orchestrator | Saturday 28 March 2026 04:50:09 +0000 (0:00:02.590) 0:07:19.135 ******** 2026-03-28 04:50:23.801713 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:50:23.801747 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:50:23.801776 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:50:23.801790 | orchestrator | 2026-03-28 04:50:23.801803 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 04:50:23.801828 | orchestrator | Saturday 28 March 2026 04:50:12 +0000 (0:00:03.304) 0:07:22.440 ******** 2026-03-28 04:50:23.801848 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:50:23.801861 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:50:23.801882 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:50:23.801895 | orchestrator | 2026-03-28 04:50:23.801907 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-28 04:50:23.801920 | orchestrator | Saturday 28 March 2026 04:50:17 +0000 (0:00:04.388) 0:07:26.828 ******** 2026-03-28 04:50:23.801933 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:50:23.801946 | orchestrator | 2026-03-28 04:50:23.801959 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-28 04:50:23.801972 | orchestrator | Saturday 28 March 2026 04:50:19 +0000 (0:00:02.630) 0:07:29.459 ******** 2026-03-28 04:50:23.801985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 04:50:23.801999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:23.802012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 04:50:23.802097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:26.099311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:26.099492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 04:50:26.099516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 04:50:26.099530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:26.099543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-28 04:50:26.099576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:26.099628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:26.099650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 04:50:26.099662 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:26.099673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:26.099684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:26.099696 | 
orchestrator | 2026-03-28 04:50:26.099709 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-28 04:50:26.099722 | orchestrator | Saturday 28 March 2026 04:50:25 +0000 (0:00:05.150) 0:07:34.609 ******** 2026-03-28 04:50:26.099773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 04:50:27.285975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:27.286205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 04:50:27.286236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:27.286259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:27.286280 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:27.286304 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 04:50:27.286366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:27.286430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 04:50:27.286455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:27.286476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:27.286497 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:27.286516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 04:50:27.286540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 04:50:27.286586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-28 04:50:44.844170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 04:50:44.844297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 04:50:44.844315 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:44.844330 | orchestrator | 2026-03-28 04:50:44.844343 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-28 04:50:44.844355 | orchestrator | Saturday 28 March 2026 04:50:27 +0000 (0:00:02.240) 0:07:36.850 ******** 2026-03-28 04:50:44.844367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844380 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844393 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:44.844405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844427 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:44.844439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 04:50:44.844481 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:44.844493 | orchestrator | 2026-03-28 04:50:44.844504 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-28 04:50:44.844515 | orchestrator | Saturday 28 March 2026 04:50:29 +0000 (0:00:02.152) 0:07:39.002 ******** 2026-03-28 04:50:44.844525 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:50:44.844537 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:50:44.844548 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:50:44.844559 | orchestrator | 2026-03-28 
04:50:44.844570 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-28 04:50:44.844581 | orchestrator | Saturday 28 March 2026 04:50:31 +0000 (0:00:02.373) 0:07:41.376 ******** 2026-03-28 04:50:44.844591 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:50:44.844602 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:50:44.844613 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:50:44.844624 | orchestrator | 2026-03-28 04:50:44.844635 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-28 04:50:44.844646 | orchestrator | Saturday 28 March 2026 04:50:34 +0000 (0:00:03.174) 0:07:44.551 ******** 2026-03-28 04:50:44.844657 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:50:44.844670 | orchestrator | 2026-03-28 04:50:44.844696 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-28 04:50:44.844708 | orchestrator | Saturday 28 March 2026 04:50:37 +0000 (0:00:02.593) 0:07:47.144 ******** 2026-03-28 04:50:44.844748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:50:44.844768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:50:44.844782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:50:44.844843 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:50:44.844889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:50:49.330596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:50:49.330713 | orchestrator | 2026-03-28 04:50:49.330741 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-28 04:50:49.330797 | orchestrator | Saturday 28 March 2026 04:50:44 +0000 (0:00:07.256) 0:07:54.401 ******** 2026-03-28 04:50:49.330903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:50:49.330925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:50:49.330946 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:49.331012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:50:49.331037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:50:49.331073 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:49.331095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:50:49.331124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:50:49.331148 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:49.331167 | orchestrator | 2026-03-28 04:50:49.331189 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-28 04:50:49.331210 | orchestrator | Saturday 28 March 2026 04:50:47 +0000 (0:00:02.657) 0:07:57.058 ******** 2026-03-28 04:50:49.331233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:50:49.331267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.936944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.937036 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:58.937048 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:50:58.937074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.937083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.937090 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:58.937096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:50:58.937103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.937110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 04:50:58.937116 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 04:50:58.937123 | orchestrator | 2026-03-28 04:50:58.937130 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-28 04:50:58.937138 | orchestrator | Saturday 28 March 2026 04:50:49 +0000 (0:00:01.836) 0:07:58.895 ******** 2026-03-28 04:50:58.937145 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:58.937151 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:58.937157 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:58.937163 | orchestrator | 2026-03-28 04:50:58.937170 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-28 04:50:58.937176 | orchestrator | Saturday 28 March 2026 04:50:50 +0000 (0:00:01.581) 0:08:00.476 ******** 2026-03-28 04:50:58.937182 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:50:58.937189 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:50:58.937195 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:50:58.937201 | orchestrator | 2026-03-28 04:50:58.937207 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-28 04:50:58.937213 | orchestrator | Saturday 28 March 2026 04:50:53 +0000 (0:00:02.392) 0:08:02.869 ******** 2026-03-28 04:50:58.937220 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:50:58.937227 | orchestrator | 2026-03-28 04:50:58.937233 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-28 04:50:58.937239 | orchestrator | Saturday 28 March 2026 04:50:56 +0000 (0:00:02.712) 0:08:05.581 ******** 2026-03-28 04:50:58.937274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 04:50:58.937291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:50:58.937299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 
04:50:58.937306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:50:58.937314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:50:58.937325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 04:50:58.937332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:50:58.937349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:00.686804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 04:51:00.686955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:51:00.686978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 04:51:00.686993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:51:00.687024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:00.687056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:00.687087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:51:00.687100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:51:00.687114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:00.687127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:00.687146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:51:00.687175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.516242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:03.516348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:03.516365 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.516378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.516432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:03.516448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:51:03.516483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:03.516496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.516508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.516519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:03.516540 | orchestrator | 2026-03-28 04:51:03.516554 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-28 04:51:03.516571 | orchestrator | Saturday 28 March 2026 04:51:02 +0000 (0:00:06.290) 0:08:11.872 ******** 2026-03-28 04:51:03.516585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 04:51:03.516606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:51:03.686293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:51:03.686433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:51:03.686477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:03.686506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686519 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:03.686543 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:03.686557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 04:51:03.686582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:51:03.686594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:03.686625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:51:04.881366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:51:04.881499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:04.881528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:04.881539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:04.881548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:04.881572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 04:51:04.881582 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:04.881592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 04:51:04.881608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:04.881621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:04.881631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 04:51:04.881640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:51:04.881656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 04:51:18.123352 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:18.123443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 04:51:18.123468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 04:51:18.123477 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:18.123486 | orchestrator | 2026-03-28 04:51:18.123494 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-28 04:51:18.123503 | orchestrator | Saturday 28 March 2026 04:51:04 +0000 (0:00:02.573) 0:08:14.446 ******** 2026-03-28 04:51:18.123511 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123547 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:18.123555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123611 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:18.123618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 04:51:18.123637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 04:51:18.123651 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:18.123658 | orchestrator | 2026-03-28 04:51:18.123665 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-28 04:51:18.123672 | orchestrator | Saturday 28 March 2026 04:51:06 +0000 (0:00:02.028) 0:08:16.475 ******** 2026-03-28 04:51:18.123678 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:18.123685 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:18.123692 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:18.123698 | orchestrator | 2026-03-28 04:51:18.123705 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-28 04:51:18.123712 | orchestrator | Saturday 28 March 2026 04:51:09 +0000 (0:00:02.325) 0:08:18.800 ******** 2026-03-28 04:51:18.123719 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:18.123725 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:18.123732 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 04:51:18.123738 | orchestrator | 2026-03-28 04:51:18.123745 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-28 04:51:18.123752 | orchestrator | Saturday 28 March 2026 04:51:11 +0000 (0:00:02.395) 0:08:21.195 ******** 2026-03-28 04:51:18.123759 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:51:18.123765 | orchestrator | 2026-03-28 04:51:18.123772 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-28 04:51:18.123783 | orchestrator | Saturday 28 March 2026 04:51:13 +0000 (0:00:02.308) 0:08:23.504 ******** 2026-03-28 04:51:18.123797 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 04:51:36.991595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 04:51:36.991756 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 04:51:36.991786 | orchestrator | 2026-03-28 04:51:36.991808 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-28 04:51:36.991828 | 
orchestrator | Saturday 28 March 2026 04:51:18 +0000 (0:00:04.174) 0:08:27.678 ******** 2026-03-28 04:51:36.991847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 04:51:36.991894 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:36.991939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 04:51:36.991990 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:36.992010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 04:51:36.992029 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:36.992048 | orchestrator | 2026-03-28 04:51:36.992067 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-28 04:51:36.992094 | orchestrator | Saturday 28 March 2026 04:51:19 +0000 (0:00:01.563) 0:08:29.242 ******** 2026-03-28 04:51:36.992116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 04:51:36.992137 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:36.992158 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 04:51:36.992178 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:36.992197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 04:51:36.992217 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:36.992236 | orchestrator | 2026-03-28 04:51:36.992255 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-28 04:51:36.992275 | orchestrator | Saturday 28 March 2026 04:51:21 +0000 (0:00:01.604) 0:08:30.846 ******** 2026-03-28 04:51:36.992297 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:36.992316 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:36.992348 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:36.992368 | orchestrator | 2026-03-28 04:51:36.992387 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-28 04:51:36.992404 | orchestrator | Saturday 28 March 2026 04:51:23 +0000 (0:00:01.979) 0:08:32.825 ******** 2026-03-28 04:51:36.992425 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:36.992445 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:51:36.992464 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:51:36.992482 | orchestrator | 2026-03-28 04:51:36.992500 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-28 04:51:36.992518 | orchestrator | Saturday 28 March 2026 04:51:25 +0000 (0:00:02.351) 0:08:35.177 ******** 2026-03-28 04:51:36.992536 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:51:36.992555 | orchestrator | 2026-03-28 04:51:36.992573 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-28 04:51:36.992591 | orchestrator | Saturday 28 March 2026 04:51:28 +0000 (0:00:02.465) 0:08:37.642 ******** 2026-03-28 04:51:36.992611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-28 04:51:36.992647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-28 04:51:38.764906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-28 04:51:38.765106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:51:38.765130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:51:38.765162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 04:51:38.765176 | orchestrator | 2026-03-28 04:51:38.765190 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-28 04:51:38.765202 | orchestrator | Saturday 28 March 2026 04:51:36 +0000 (0:00:08.909) 0:08:46.552 ******** 2026-03-28 04:51:38.765221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-28 04:51:38.765243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:51:38.765255 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:51:38.765268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-28 04:51:38.765288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:52:01.136541 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.136685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-28 04:52:01.136707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 04:52:01.136720 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.136732 | orchestrator | 2026-03-28 04:52:01.136744 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-28 04:52:01.136756 | orchestrator | Saturday 28 March 2026 04:51:38 +0000 (0:00:01.774) 
0:08:48.326 ******** 2026-03-28 04:52:01.136769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.136808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.136820 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.136831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136892 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.136915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 04:52:01.136955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.136974 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.136992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.137040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 04:52:01.137060 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137079 | orchestrator | 
2026-03-28 04:52:01.137097 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-28 04:52:01.137116 | orchestrator | Saturday 28 March 2026 04:51:40 +0000 (0:00:02.116) 0:08:50.443 ******** 2026-03-28 04:52:01.137136 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:52:01.137156 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:52:01.137174 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:52:01.137194 | orchestrator | 2026-03-28 04:52:01.137213 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-28 04:52:01.137231 | orchestrator | Saturday 28 March 2026 04:51:43 +0000 (0:00:02.329) 0:08:52.773 ******** 2026-03-28 04:52:01.137250 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:52:01.137267 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:52:01.137286 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:52:01.137306 | orchestrator | 2026-03-28 04:52:01.137325 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-28 04:52:01.137346 | orchestrator | Saturday 28 March 2026 04:51:46 +0000 (0:00:03.240) 0:08:56.013 ******** 2026-03-28 04:52:01.137365 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.137385 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.137404 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137424 | orchestrator | 2026-03-28 04:52:01.137442 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-28 04:52:01.137459 | orchestrator | Saturday 28 March 2026 04:51:47 +0000 (0:00:01.467) 0:08:57.481 ******** 2026-03-28 04:52:01.137470 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.137481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.137492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137502 | orchestrator | 2026-03-28 04:52:01.137514 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-03-28 04:52:01.137540 | orchestrator | Saturday 28 March 2026 04:51:49 +0000 (0:00:01.339) 0:08:58.820 ******** 2026-03-28 04:52:01.137551 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.137562 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.137573 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137583 | orchestrator | 2026-03-28 04:52:01.137594 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-28 04:52:01.137605 | orchestrator | Saturday 28 March 2026 04:51:51 +0000 (0:00:01.821) 0:09:00.642 ******** 2026-03-28 04:52:01.137616 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.137627 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.137637 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137648 | orchestrator | 2026-03-28 04:52:01.137659 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-28 04:52:01.137670 | orchestrator | Saturday 28 March 2026 04:51:52 +0000 (0:00:01.411) 0:09:02.053 ******** 2026-03-28 04:52:01.137681 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:01.137692 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:52:01.137702 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:52:01.137713 | orchestrator | 2026-03-28 04:52:01.137724 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-28 04:52:01.137735 | orchestrator | Saturday 28 March 2026 04:51:53 +0000 (0:00:01.433) 0:09:03.487 ******** 2026-03-28 04:52:01.137746 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:52:01.137758 | orchestrator | 2026-03-28 04:52:01.137769 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-03-28 04:52:01.137779 | orchestrator | Saturday 28 March 2026 04:51:56 +0000 (0:00:03.028) 0:09:06.516 ******** 2026-03-28 04:52:01.137815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 04:52:05.909540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:52:05.909556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 04:52:05.909561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-28 04:52:05.909565 | orchestrator | 2026-03-28 04:52:05.909570 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-28 04:52:05.909575 | orchestrator | Saturday 28 March 2026 04:52:01 +0000 (0:00:04.179) 0:09:10.695 ******** 2026-03-28 04:52:05.909580 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 04:52:05.909585 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:52:05.909589 | orchestrator | } 2026-03-28 04:52:05.909593 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 04:52:05.909601 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:52:05.909605 | orchestrator | } 2026-03-28 04:52:05.909610 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 04:52:05.909613 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:52:05.909617 | orchestrator | } 2026-03-28 04:52:05.909621 | orchestrator | 2026-03-28 04:52:05.909626 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 04:52:05.909630 | orchestrator | Saturday 28 March 2026 04:52:02 +0000 (0:00:01.504) 0:09:12.200 ******** 2026-03-28 04:52:05.909634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 04:52:05.909638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:52:05.909642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:52:05.909646 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:52:05.909653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 04:52:05.909661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:54:07.793938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:54:07.794137 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.794159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 04:54:07.794174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 04:54:07.794186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 04:54:07.794197 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.794209 | orchestrator | 2026-03-28 04:54:07.794221 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-28 04:54:07.794233 | orchestrator | Saturday 28 March 2026 04:52:05 +0000 (0:00:03.267) 0:09:15.467 ******** 2026-03-28 04:54:07.794245 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.794257 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.794268 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.794279 | orchestrator | 2026-03-28 04:54:07.794344 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-28 04:54:07.794362 | orchestrator | Saturday 28 March 2026 04:52:07 +0000 (0:00:01.884) 0:09:17.352 
******** 2026-03-28 04:54:07.794381 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.794397 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.794413 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.794430 | orchestrator | 2026-03-28 04:54:07.794467 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-28 04:54:07.794489 | orchestrator | Saturday 28 March 2026 04:52:09 +0000 (0:00:01.439) 0:09:18.791 ******** 2026-03-28 04:54:07.794508 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.794528 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.794547 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:54:07.794566 | orchestrator | 2026-03-28 04:54:07.794589 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-28 04:54:07.794612 | orchestrator | Saturday 28 March 2026 04:52:16 +0000 (0:00:07.209) 0:09:26.001 ******** 2026-03-28 04:54:07.794633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.794670 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.794691 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:54:07.794711 | orchestrator | 2026-03-28 04:54:07.794732 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-28 04:54:07.794752 | orchestrator | Saturday 28 March 2026 04:52:24 +0000 (0:00:07.760) 0:09:33.762 ******** 2026-03-28 04:54:07.794772 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.794793 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.794813 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:54:07.794834 | orchestrator | 2026-03-28 04:54:07.794855 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-28 04:54:07.794875 | orchestrator | Saturday 28 March 2026 04:52:31 +0000 (0:00:07.150) 0:09:40.913 ******** 2026-03-28 
04:54:07.794893 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.794912 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.794932 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:54:07.794952 | orchestrator | 2026-03-28 04:54:07.794996 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-28 04:54:07.795117 | orchestrator | Saturday 28 March 2026 04:52:38 +0000 (0:00:07.567) 0:09:48.481 ******** 2026-03-28 04:54:07.795138 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.795159 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.795179 | orchestrator | 2026-03-28 04:54:07.795199 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-28 04:54:07.795219 | orchestrator | Saturday 28 March 2026 04:52:42 +0000 (0:00:03.704) 0:09:52.185 ******** 2026-03-28 04:54:07.795239 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.795258 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.795278 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:54:07.795327 | orchestrator | 2026-03-28 04:54:07.795348 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-28 04:54:07.795366 | orchestrator | Saturday 28 March 2026 04:52:55 +0000 (0:00:13.342) 0:10:05.528 ******** 2026-03-28 04:54:07.795387 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.795406 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.795426 | orchestrator | 2026-03-28 04:54:07.795447 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-28 04:54:07.795466 | orchestrator | Saturday 28 March 2026 04:52:59 +0000 (0:00:03.718) 0:10:09.247 ******** 2026-03-28 04:54:07.795484 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:07.795496 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:54:07.795507 | orchestrator | 
changed: [testbed-node-2] 2026-03-28 04:54:07.795518 | orchestrator | 2026-03-28 04:54:07.795530 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-28 04:54:07.795541 | orchestrator | Saturday 28 March 2026 04:53:07 +0000 (0:00:07.420) 0:10:16.667 ******** 2026-03-28 04:54:07.795552 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.795563 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.795574 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:54:07.795585 | orchestrator | 2026-03-28 04:54:07.795597 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-28 04:54:07.795608 | orchestrator | Saturday 28 March 2026 04:53:13 +0000 (0:00:06.800) 0:10:23.467 ******** 2026-03-28 04:54:07.795620 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.795631 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.795642 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:54:07.795653 | orchestrator | 2026-03-28 04:54:07.795664 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-28 04:54:07.795676 | orchestrator | Saturday 28 March 2026 04:53:20 +0000 (0:00:06.889) 0:10:30.357 ******** 2026-03-28 04:54:07.795687 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.795698 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.795709 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:54:07.795720 | orchestrator | 2026-03-28 04:54:07.795731 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-28 04:54:07.795756 | orchestrator | Saturday 28 March 2026 04:53:27 +0000 (0:00:06.864) 0:10:37.222 ******** 2026-03-28 04:54:07.795767 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.795778 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.795791 | orchestrator | 
changed: [testbed-node-0] 2026-03-28 04:54:07.795810 | orchestrator | 2026-03-28 04:54:07.795829 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-03-28 04:54:07.795848 | orchestrator | Saturday 28 March 2026 04:53:34 +0000 (0:00:07.180) 0:10:44.402 ******** 2026-03-28 04:54:07.795867 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.795887 | orchestrator | 2026-03-28 04:54:07.795906 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-28 04:54:07.795925 | orchestrator | Saturday 28 March 2026 04:53:38 +0000 (0:00:03.596) 0:10:47.999 ******** 2026-03-28 04:54:07.795944 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.795962 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.795981 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:54:07.796000 | orchestrator | 2026-03-28 04:54:07.796020 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-03-28 04:54:07.796040 | orchestrator | Saturday 28 March 2026 04:53:51 +0000 (0:00:13.017) 0:11:01.016 ******** 2026-03-28 04:54:07.796059 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.796078 | orchestrator | 2026-03-28 04:54:07.796090 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-28 04:54:07.796101 | orchestrator | Saturday 28 March 2026 04:53:56 +0000 (0:00:04.685) 0:11:05.702 ******** 2026-03-28 04:54:07.796112 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:07.796123 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:07.796134 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:54:07.796145 | orchestrator | 2026-03-28 04:54:07.796167 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-28 04:54:07.796178 | orchestrator | Saturday 28 March 2026 04:54:03 +0000 (0:00:07.003) 
0:11:12.705 ******** 2026-03-28 04:54:07.796189 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.796200 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.796211 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.796222 | orchestrator | 2026-03-28 04:54:07.796233 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-28 04:54:07.796244 | orchestrator | Saturday 28 March 2026 04:54:05 +0000 (0:00:02.002) 0:11:14.708 ******** 2026-03-28 04:54:07.796255 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:07.796266 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:07.796277 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:07.796311 | orchestrator | 2026-03-28 04:54:07.796323 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:54:07.796335 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-28 04:54:07.796349 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-28 04:54:07.796374 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-28 04:54:08.754387 | orchestrator | 2026-03-28 04:54:08.754490 | orchestrator | 2026-03-28 04:54:08.754507 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:54:08.754520 | orchestrator | Saturday 28 March 2026 04:54:07 +0000 (0:00:02.637) 0:11:17.346 ******** 2026-03-28 04:54:08.754532 | orchestrator | =============================================================================== 2026-03-28 04:54:08.754543 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.34s 2026-03-28 04:54:08.754554 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.02s 2026-03-28 
04:54:08.754594 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.91s 2026-03-28 04:54:08.754605 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.76s 2026-03-28 04:54:08.754616 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.69s 2026-03-28 04:54:08.754627 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.57s 2026-03-28 04:54:08.754638 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.42s 2026-03-28 04:54:08.754649 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.26s 2026-03-28 04:54:08.754660 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.21s 2026-03-28 04:54:08.754671 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.18s 2026-03-28 04:54:08.754682 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.15s 2026-03-28 04:54:08.754693 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.00s 2026-03-28 04:54:08.754703 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.89s 2026-03-28 04:54:08.754714 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.86s 2026-03-28 04:54:08.754725 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.80s 2026-03-28 04:54:08.754736 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 6.29s 2026-03-28 04:54:08.754748 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.05s 2026-03-28 04:54:08.754759 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.83s 2026-03-28 04:54:08.754770 
| orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.80s 2026-03-28 04:54:08.754781 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.15s 2026-03-28 04:54:09.105566 | orchestrator | + osism apply -a upgrade opensearch 2026-03-28 04:54:11.197567 | orchestrator | 2026-03-28 04:54:11 | INFO  | Task 67d83c0d-9f4b-4e84-a461-257651f1d0ab (opensearch) was prepared for execution. 2026-03-28 04:54:11.197661 | orchestrator | 2026-03-28 04:54:11 | INFO  | It takes a moment until task 67d83c0d-9f4b-4e84-a461-257651f1d0ab (opensearch) has been started and output is visible here. 2026-03-28 04:54:23.172383 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-28 04:54:23.172538 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-28 04:54:23.172571 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-28 04:54:23.172583 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-28 04:54:23.172606 | orchestrator | 2026-03-28 04:54:23.172617 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 04:54:23.172627 | orchestrator | 2026-03-28 04:54:23.172637 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 04:54:23.172647 | orchestrator | Saturday 28 March 2026 04:54:16 +0000 (0:00:01.035) 0:00:01.035 ******** 2026-03-28 04:54:23.172657 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:23.172668 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:23.172678 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:23.172688 | orchestrator | 2026-03-28 04:54:23.172715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 04:54:23.172726 | orchestrator | Saturday 28 March 2026 04:54:17 +0000 (0:00:00.915) 
0:00:01.950 ******** 2026-03-28 04:54:23.172736 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-28 04:54:23.172746 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-28 04:54:23.172756 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-28 04:54:23.172786 | orchestrator | 2026-03-28 04:54:23.172797 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-28 04:54:23.172807 | orchestrator | 2026-03-28 04:54:23.172818 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 04:54:23.172829 | orchestrator | Saturday 28 March 2026 04:54:18 +0000 (0:00:00.823) 0:00:02.773 ******** 2026-03-28 04:54:23.172840 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:54:23.172852 | orchestrator | 2026-03-28 04:54:23.172863 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-28 04:54:23.172874 | orchestrator | Saturday 28 March 2026 04:54:19 +0000 (0:00:01.085) 0:00:03.859 ******** 2026-03-28 04:54:23.172885 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 04:54:23.172896 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 04:54:23.172907 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 04:54:23.172918 | orchestrator | 2026-03-28 04:54:23.172929 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-28 04:54:23.172941 | orchestrator | Saturday 28 March 2026 04:54:21 +0000 (0:00:02.388) 0:00:06.247 ******** 2026-03-28 04:54:23.172955 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:23.172972 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:23.173003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:23.173030 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:23.173044 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:23.173065 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:27.641860 | orchestrator | 2026-03-28 04:54:27.641958 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 04:54:27.641974 | orchestrator | Saturday 28 March 2026 04:54:23 +0000 (0:00:01.452) 0:00:07.699 ******** 2026-03-28 04:54:27.641987 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:54:27.642080 | orchestrator | 2026-03-28 04:54:27.642094 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-28 04:54:27.642105 | orchestrator | Saturday 28 March 2026 04:54:24 +0000 (0:00:00.952) 0:00:08.652 ******** 2026-03-28 04:54:27.642133 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 
04:54:27.642150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:27.642161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:27.642200 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:27.642241 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:27.642264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:27.642284 | orchestrator | 2026-03-28 04:54:27.642303 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-28 04:54:27.642380 | orchestrator | Saturday 28 March 2026 04:54:26 +0000 (0:00:02.647) 0:00:11.299 ******** 2026-03-28 04:54:27.642404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:54:27.642443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:54:28.742080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:28.742169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:28.742181 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:28.742190 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:28.742198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:54:28.742239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:28.742248 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:28.742255 | orchestrator | 2026-03-28 04:54:28.742262 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-28 04:54:28.742269 | orchestrator | Saturday 28 March 2026 04:54:27 +0000 (0:00:00.878) 0:00:12.178 ******** 2026-03-28 04:54:28.742276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:54:28.742283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:28.742290 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:54:28.742297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-03-28 04:54:28.742318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:31.506131 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:54:31.506243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:54:31.506265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:54:31.506307 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:54:31.506321 | orchestrator | 2026-03-28 04:54:31.506366 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-28 04:54:31.506379 | orchestrator | Saturday 28 March 2026 04:54:28 +0000 (0:00:01.092) 0:00:13.271 ******** 2026-03-28 04:54:31.506390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:31.506433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:31.506448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:31.506460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:31.506482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:31.506509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:40.497230 | orchestrator | 2026-03-28 04:54:40.497397 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-28 04:54:40.497429 | orchestrator | Saturday 28 March 2026 04:54:31 +0000 (0:00:02.762) 0:00:16.034 ******** 2026-03-28 04:54:40.497450 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:40.497470 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:40.497488 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:40.497506 | orchestrator | 2026-03-28 04:54:40.497525 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-28 04:54:40.497544 | orchestrator | Saturday 28 March 2026 04:54:33 +0000 (0:00:02.482) 0:00:18.516 ******** 2026-03-28 04:54:40.497563 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:54:40.497583 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:54:40.497600 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:54:40.497617 | orchestrator | 2026-03-28 04:54:40.497636 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-28 04:54:40.497653 | orchestrator | Saturday 28 March 2026 04:54:36 +0000 (0:00:02.060) 0:00:20.577 ******** 2026-03-28 04:54:40.497675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:40.497731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:40.497768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 04:54:40.497817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:40.497840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:40.497874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 04:54:40.497893 | orchestrator | 2026-03-28 04:54:40.497909 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-28 04:54:40.497928 | orchestrator | Saturday 28 March 2026 04:54:38 +0000 (0:00:02.833) 0:00:23.410 ******** 2026-03-28 04:54:40.497953 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 04:54:40.497971 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:54:40.497988 | orchestrator | } 2026-03-28 04:54:40.498005 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 04:54:40.498098 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:54:40.498120 | orchestrator | } 2026-03-28 04:54:40.498135 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 04:54:40.498152 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:54:40.498169 | orchestrator | } 2026-03-28 04:54:40.498185 | orchestrator | 2026-03-28 04:54:40.498203 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 04:54:40.498220 | orchestrator | Saturday 28 March 2026 04:54:39 +0000 (0:00:00.367) 0:00:23.778 ******** 2026-03-28 04:54:40.498258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 04:57:39.994099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 04:57:39.994200 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:57:39.994216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:57:39.994243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-28 04:57:39.994253 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:57:39.994277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 04:57:39.994304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-28 04:57:39.994314 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:57:39.994322 | orchestrator |
2026-03-28 04:57:39.994331 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-28 04:57:39.994340 | orchestrator | Saturday 28 March 2026 04:54:40 +0000 (0:00:01.254) 0:00:25.032 ********
2026-03-28 04:57:39.994349 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:57:39.994357 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-28 04:57:39.994365 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-28 04:57:39.994381 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:57:39.994389 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:57:39.994396 | orchestrator |
2026-03-28 04:57:39.994404 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 04:57:39.994411 | orchestrator | Saturday 28 March 2026 04:54:41 +0000 (0:00:00.075) 0:00:25.592 ********
2026-03-28 04:57:39.994419 | orchestrator |
2026-03-28 04:57:39.994427 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 04:57:39.994434 | orchestrator | Saturday 28 March 2026 04:54:41 +0000 (0:00:00.075) 0:00:25.667 ********
2026-03-28 04:57:39.994441 | orchestrator |
2026-03-28 04:57:39.994447 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 04:57:39.994455 | orchestrator | Saturday 28 March 2026 04:54:41 +0000 (0:00:00.075) 0:00:25.743 ********
2026-03-28 04:57:39.994462 | orchestrator |
2026-03-28 04:57:39.994469 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-28 04:57:39.994477 | orchestrator | Saturday 28 March 2026 04:54:41 +0000 (0:00:00.073) 0:00:25.817 ********
2026-03-28 04:57:39.994484 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:57:39.994492 | orchestrator |
2026-03-28 04:57:39.994500 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-28 04:57:39.994513 | orchestrator | Saturday 28 March 2026 04:54:43 +0000 (0:00:02.526) 0:00:28.344 ********
2026-03-28 04:57:39.994521 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:57:39.994530 | orchestrator |
2026-03-28 04:57:39.994538 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-28 04:57:39.994545 | orchestrator | Saturday 28 March 2026 04:54:47 +0000 (0:00:03.550) 0:00:31.894 ********
2026-03-28 04:57:39.994553 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:57:39.994570 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:57:39.994578 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:57:39.994585 | orchestrator |
2026-03-28 04:57:39.994592 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-28 04:57:39.994599 | orchestrator | Saturday 28 March 2026 04:56:03 +0000 (0:01:15.930) 0:01:47.825 ********
2026-03-28 04:57:39.994606 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:57:39.994613 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:57:39.994620 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:57:39.994628 | orchestrator |
2026-03-28 04:57:39.994635 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-28 04:57:39.994642 | orchestrator | Saturday 28 March 2026 04:57:34 +0000 (0:01:31.114) 0:03:18.940 ********
2026-03-28 04:57:39.994649 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:57:39.994657 | orchestrator |
2026-03-28 04:57:39.994665 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-28 04:57:39.994698 | orchestrator | Saturday 28 March 2026 04:57:35 +0000 (0:00:01.076) 0:03:20.016 ********
2026-03-28 04:57:39.994707 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:57:39.994713 | orchestrator |
2026-03-28 04:57:39.994720 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-28 04:57:39.994728 | orchestrator | Saturday 28 March 2026 04:57:37 +0000 (0:00:02.265) 0:03:22.282 ********
2026-03-28 04:57:39.994735 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:57:39.994742 | orchestrator |
2026-03-28 04:57:39.994758 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-28 04:57:42.244223 | orchestrator | Saturday 28 March 2026 04:57:39 +0000 (0:00:02.238) 0:03:24.521 ********
2026-03-28 04:57:42.244328 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:57:42.244347 | orchestrator |
2026-03-28 04:57:42.244360 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-28 04:57:42.244372 | orchestrator | Saturday 28 March 2026 04:57:40 +0000 (0:00:00.259) 0:03:24.781 ********
2026-03-28 04:57:42.244383 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:57:42.244394 | orchestrator |
2026-03-28 04:57:42.244405 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:57:42.244417 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 04:57:42.244429 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-28 04:57:42.244440 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-28 04:57:42.244451 | orchestrator |
2026-03-28 04:57:42.244462 | orchestrator |
2026-03-28 04:57:42.244473 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:57:42.244483 | orchestrator | Saturday 28 March 2026 04:57:41 +0000 (0:00:01.610) 0:03:26.392 ********
2026-03-28 04:57:42.244494 | orchestrator | ===============================================================================
2026-03-28 04:57:42.244505 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 91.12s
2026-03-28 04:57:42.244516 | orchestrator | opensearch : Restart opensearch container ------------------------------ 75.93s
2026-03-28 04:57:42.244526 | orchestrator | opensearch : Perform a flush -------------------------------------------- 3.55s
2026-03-28 04:57:42.244537 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.83s
2026-03-28 04:57:42.244548 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.76s
2026-03-28 04:57:42.244558 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.65s
2026-03-28 04:57:42.244569 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.53s
2026-03-28 04:57:42.244609 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s
2026-03-28 04:57:42.244620 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.39s
2026-03-28 04:57:42.244631 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.27s
2026-03-28 04:57:42.244642 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.24s
2026-03-28 04:57:42.244653 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.06s
2026-03-28 04:57:42.244663 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.61s
2026-03-28 04:57:42.244733 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.45s
2026-03-28 04:57:42.244747 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.25s
2026-03-28 04:57:42.244759 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s
2026-03-28 04:57:42.244772 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.09s
2026-03-28 04:57:42.244785 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.08s
2026-03-28 04:57:42.244798 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.95s
2026-03-28 04:57:42.244826 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s
2026-03-28 04:57:42.565981 | orchestrator | + osism apply -a upgrade memcached
2026-03-28 04:57:44.666267 | orchestrator | 2026-03-28 04:57:44 | INFO  | Task 7943c51c-8e52-4cff-b1d0-00431aeecc2c (memcached) was prepared for execution.
2026-03-28 04:57:44.666367 | orchestrator | 2026-03-28 04:57:44 | INFO  | It takes a moment until task 7943c51c-8e52-4cff-b1d0-00431aeecc2c (memcached) has been started and output is visible here.
2026-03-28 04:58:19.302624 | orchestrator |
2026-03-28 04:58:19.302797 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 04:58:19.302817 | orchestrator |
2026-03-28 04:58:19.302828 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 04:58:19.302838 | orchestrator | Saturday 28 March 2026 04:57:50 +0000 (0:00:01.935) 0:00:01.936 ********
2026-03-28 04:58:19.302848 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:58:19.302859 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:58:19.302869 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:58:19.302879 | orchestrator |
2026-03-28 04:58:19.302889 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 04:58:19.302899 | orchestrator | Saturday 28 March 2026 04:57:52 +0000 (0:00:01.735) 0:00:03.671 ********
2026-03-28 04:58:19.302909 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-28 04:58:19.302919 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-28 04:58:19.302929 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-28 04:58:19.302939 | orchestrator |
2026-03-28 04:58:19.302948 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-28 04:58:19.302958 | orchestrator |
2026-03-28 04:58:19.302968 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-28 04:58:19.302977 | orchestrator | Saturday 28 March 2026 04:57:54 +0000 (0:00:01.819) 0:00:05.490 ********
2026-03-28 04:58:19.302987 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:58:19.302997 | orchestrator |
2026-03-28 04:58:19.303007 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-28 04:58:19.303018 | orchestrator | Saturday 28 March 2026 04:57:57 +0000 (0:00:03.200) 0:00:08.690 ********
2026-03-28 04:58:19.303028 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-28 04:58:19.303038 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-28 04:58:19.303048 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-28 04:58:19.303057 | orchestrator |
2026-03-28 04:58:19.303067 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-28 04:58:19.303100 | orchestrator | Saturday 28 March 2026 04:57:59 +0000 (0:00:01.956) 0:00:10.647 ********
2026-03-28 04:58:19.303110 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-28 04:58:19.303120 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-28 04:58:19.303130 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-28 04:58:19.303139 | orchestrator |
2026-03-28 04:58:19.303149 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-03-28 04:58:19.303160 | orchestrator | Saturday 28 March 2026 04:58:02 +0000 (0:00:02.768) 0:00:13.415 ********
2026-03-28 04:58:19.303176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303246 | orchestrator |
2026-03-28 04:58:19.303257 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-03-28 04:58:19.303269 | orchestrator | Saturday 28 March 2026 04:58:04 +0000 (0:00:02.469) 0:00:15.884 ********
2026-03-28 04:58:19.303280 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 04:58:19.303291 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 04:58:19.303303 | orchestrator | }
2026-03-28 04:58:19.303314 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 04:58:19.303324 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 04:58:19.303334 | orchestrator | }
2026-03-28 04:58:19.303343 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 04:58:19.303353 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 04:58:19.303363 | orchestrator | }
2026-03-28 04:58:19.303372 | orchestrator |
2026-03-28 04:58:19.303382 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 04:58:19.303392 | orchestrator | Saturday 28 March 2026 04:58:06 +0000 (0:00:01.449) 0:00:17.333 ********
2026-03-28 04:58:19.303409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303420 | orchestrator | skipping: [testbed-node-0]
2026-03-28 04:58:19.303431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303441 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:58:19.303451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-28 04:58:19.303461 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:58:19.303471 | orchestrator |
2026-03-28 04:58:19.303480 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-28 04:58:19.303490 | orchestrator | Saturday 28 March 2026 04:58:08 +0000 (0:00:02.138) 0:00:19.472 ********
2026-03-28 04:58:19.303500 | orchestrator | changed: [testbed-node-0]
2026-03-28 04:58:19.303509 | orchestrator | changed: [testbed-node-2]
2026-03-28 04:58:19.303519 | orchestrator | changed: [testbed-node-1]
2026-03-28 04:58:19.303529 | orchestrator |
2026-03-28 04:58:19.303538 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 04:58:19.303549 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 04:58:19.303564 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 04:58:19.303574 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 04:58:19.303584 | orchestrator |
2026-03-28 04:58:19.303594 | orchestrator |
2026-03-28 04:58:19.303603 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 04:58:19.303619 | orchestrator | Saturday 28 March 2026 04:58:19 +0000 (0:00:10.820) 0:00:30.292 ********
2026-03-28 04:58:19.626804 | orchestrator | ===============================================================================
2026-03-28 04:58:19.626937 | orchestrator | memcached : Restart memcached container -------------------------------- 10.82s
2026-03-28 04:58:19.626958 | orchestrator | memcached : include_tasks ----------------------------------------------- 3.20s
2026-03-28 04:58:19.626968 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.77s
2026-03-28 04:58:19.626977 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.47s
2026-03-28 04:58:19.626986 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.14s
2026-03-28 04:58:19.626995 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.96s
2026-03-28 04:58:19.627004 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.82s
2026-03-28 04:58:19.627013 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.73s
2026-03-28 04:58:19.627022 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.45s
2026-03-28 04:58:19.952281 | orchestrator | + osism apply -a upgrade redis
2026-03-28 04:58:22.025319 | orchestrator | 2026-03-28 04:58:22 | INFO  | Task 6a116390-0346-4e35-b19c-fbd0724e02c7 (redis) was prepared for execution.
2026-03-28 04:58:22.025391 | orchestrator | 2026-03-28 04:58:22 | INFO  | It takes a moment until task 6a116390-0346-4e35-b19c-fbd0724e02c7 (redis) has been started and output is visible here.
2026-03-28 04:58:33.953544 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-28 04:58:33.953687 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-28 04:58:33.953738 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-28 04:58:33.953758 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-28 04:58:33.953869 | orchestrator |
2026-03-28 04:58:33.953887 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 04:58:33.953908 | orchestrator |
2026-03-28 04:58:33.953927 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 04:58:33.953947 | orchestrator | Saturday 28 March 2026 04:58:27 +0000 (0:00:01.021) 0:00:01.022 ********
2026-03-28 04:58:33.953966 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:58:33.953987 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:58:33.953999 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:58:33.954010 | orchestrator |
2026-03-28 04:58:33.954096 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 04:58:33.954116 | orchestrator | Saturday 28 March 2026 04:58:28 +0000 (0:00:00.945) 0:00:01.967 ********
2026-03-28 04:58:33.954135 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-28 04:58:33.954155 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-28 04:58:33.954174 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-28 04:58:33.954187 | orchestrator |
2026-03-28 04:58:33.954200 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-28 04:58:33.954212 | orchestrator |
2026-03-28 04:58:33.954225 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-28 04:58:33.954238 | orchestrator | Saturday 28 March 2026 04:58:29 +0000 (0:00:00.845) 0:00:02.813 ********
2026-03-28 04:58:33.954251 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:58:33.954264 | orchestrator |
2026-03-28 04:58:33.954278 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-28 04:58:33.954292 | orchestrator | Saturday 28 March 2026 04:58:30 +0000 (0:00:01.201) 0:00:04.015 ********
2026-03-28 04:58:33.954309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954353 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954367 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954381 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954416 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954470 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954483 | orchestrator |
2026-03-28 04:58:33.954494 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-28 04:58:33.954506 | orchestrator | Saturday 28 March 2026 04:58:31 +0000 (0:00:01.521) 0:00:05.536 ********
2026-03-28 04:58:33.954526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954543 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954555 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:33.954588 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229407 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229537 | orchestrator |
2026-03-28 04:58:39.229556 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-28 04:58:39.229569 | orchestrator | Saturday 28 March 2026 04:58:33 +0000 (0:00:02.117) 0:00:07.654 ********
2026-03-28 04:58:39.229583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229610 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229623 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229646 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229676 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-28 04:58:39.229697 | orchestrator |
2026-03-28 04:58:39.229710 | orchestrator
| TASK [service-check-containers : redis | Check containers] ********************* 2026-03-28 04:58:39.229722 | orchestrator | Saturday 28 March 2026 04:58:36 +0000 (0:00:02.902) 0:00:10.557 ******** 2026-03-28 04:58:39.229734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 04:58:39.229747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 04:58:39.229765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-03-28 04:58:39.229818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 04:58:39.229842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 04:58:39.229875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 04:59:02.083143 | orchestrator | 2026-03-28 04:59:02.083258 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-28 04:59:02.083276 | orchestrator | Saturday 28 March 2026 04:58:39 +0000 (0:00:02.374) 0:00:12.932 ******** 2026-03-28 04:59:02.083289 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 04:59:02.083302 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:59:02.083314 | orchestrator | } 2026-03-28 04:59:02.083325 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 04:59:02.083337 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:59:02.083348 | orchestrator | } 2026-03-28 04:59:02.083359 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 04:59:02.083370 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 04:59:02.083381 | orchestrator | } 2026-03-28 04:59:02.083392 | orchestrator | 2026-03-28 04:59:02.083404 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 04:59:02.083415 | orchestrator | Saturday 28 March 2026 04:58:39 +0000 (0:00:00.538) 0:00:13.470 ******** 2026-03-28 04:59:02.083429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083496 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-28 04:59:02.083516 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-28 04:59:02.083554 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:59:02.083567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083613 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:02.083646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 04:59:02.083673 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:02.083687 | orchestrator | 2026-03-28 
04:59:02.083700 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 04:59:02.083714 | orchestrator | Saturday 28 March 2026 04:58:40 +0000 (0:00:01.057) 0:00:14.528 ******** 2026-03-28 04:59:02.083727 | orchestrator | 2026-03-28 04:59:02.083740 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 04:59:02.083753 | orchestrator | Saturday 28 March 2026 04:58:40 +0000 (0:00:00.081) 0:00:14.610 ******** 2026-03-28 04:59:02.083766 | orchestrator | 2026-03-28 04:59:02.083779 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 04:59:02.083792 | orchestrator | Saturday 28 March 2026 04:58:40 +0000 (0:00:00.071) 0:00:14.682 ******** 2026-03-28 04:59:02.083838 | orchestrator | 2026-03-28 04:59:02.083854 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-28 04:59:02.083874 | orchestrator | Saturday 28 March 2026 04:58:41 +0000 (0:00:00.074) 0:00:14.756 ******** 2026-03-28 04:59:02.083889 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:59:02.083910 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:59:02.083931 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:59:02.083952 | orchestrator | 2026-03-28 04:59:02.083972 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-28 04:59:02.083993 | orchestrator | Saturday 28 March 2026 04:58:50 +0000 (0:00:09.921) 0:00:24.677 ******** 2026-03-28 04:59:02.084011 | orchestrator | changed: [testbed-node-1] 2026-03-28 04:59:02.084022 | orchestrator | changed: [testbed-node-0] 2026-03-28 04:59:02.084033 | orchestrator | changed: [testbed-node-2] 2026-03-28 04:59:02.084045 | orchestrator | 2026-03-28 04:59:02.084056 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 04:59:02.084068 | 
orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 04:59:02.084081 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 04:59:02.084093 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 04:59:02.084114 | orchestrator | 2026-03-28 04:59:02.084125 | orchestrator | 2026-03-28 04:59:02.084136 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 04:59:02.084147 | orchestrator | Saturday 28 March 2026 04:59:01 +0000 (0:00:10.660) 0:00:35.338 ******** 2026-03-28 04:59:02.084158 | orchestrator | =============================================================================== 2026-03-28 04:59:02.084169 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.66s 2026-03-28 04:59:02.084180 | orchestrator | redis : Restart redis container ----------------------------------------- 9.92s 2026-03-28 04:59:02.084191 | orchestrator | redis : Copying over redis config files --------------------------------- 2.90s 2026-03-28 04:59:02.084202 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.37s 2026-03-28 04:59:02.084213 | orchestrator | redis : Copying over default config.json files -------------------------- 2.12s 2026-03-28 04:59:02.084224 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.52s 2026-03-28 04:59:02.084235 | orchestrator | redis : include_tasks --------------------------------------------------- 1.20s 2026-03-28 04:59:02.084246 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.06s 2026-03-28 04:59:02.084257 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2026-03-28 04:59:02.084268 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 0.85s 2026-03-28 04:59:02.084279 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.54s 2026-03-28 04:59:02.084290 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s 2026-03-28 04:59:02.426349 | orchestrator | + osism apply -a upgrade mariadb 2026-03-28 04:59:04.484140 | orchestrator | 2026-03-28 04:59:04 | INFO  | Task 9b4e9413-db39-43ff-a441-ecb628daa29b (mariadb) was prepared for execution. 2026-03-28 04:59:04.484240 | orchestrator | 2026-03-28 04:59:04 | INFO  | It takes a moment until task 9b4e9413-db39-43ff-a441-ecb628daa29b (mariadb) has been started and output is visible here. 2026-03-28 04:59:29.356955 | orchestrator | 2026-03-28 04:59:29.357075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 04:59:29.357092 | orchestrator | 2026-03-28 04:59:29.357105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 04:59:29.357117 | orchestrator | Saturday 28 March 2026 04:59:10 +0000 (0:00:01.506) 0:00:01.506 ******** 2026-03-28 04:59:29.357129 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:59:29.357141 | orchestrator | ok: [testbed-node-1] 2026-03-28 04:59:29.357152 | orchestrator | ok: [testbed-node-2] 2026-03-28 04:59:29.357163 | orchestrator | 2026-03-28 04:59:29.357174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 04:59:29.357185 | orchestrator | Saturday 28 March 2026 04:59:12 +0000 (0:00:01.737) 0:00:03.244 ******** 2026-03-28 04:59:29.357197 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-28 04:59:29.357208 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-28 04:59:29.357219 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 04:59:29.357230 | 
orchestrator | 2026-03-28 04:59:29.357241 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 04:59:29.357252 | orchestrator | 2026-03-28 04:59:29.357263 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 04:59:29.357274 | orchestrator | Saturday 28 March 2026 04:59:14 +0000 (0:00:01.854) 0:00:05.099 ******** 2026-03-28 04:59:29.357285 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 04:59:29.357297 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 04:59:29.357309 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 04:59:29.357320 | orchestrator | 2026-03-28 04:59:29.357356 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 04:59:29.357368 | orchestrator | Saturday 28 March 2026 04:59:15 +0000 (0:00:01.432) 0:00:06.531 ******** 2026-03-28 04:59:29.357379 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 04:59:29.357391 | orchestrator | 2026-03-28 04:59:29.357402 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-28 04:59:29.357413 | orchestrator | Saturday 28 March 2026 04:59:17 +0000 (0:00:01.822) 0:00:08.354 ******** 2026-03-28 04:59:29.357446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:29.357485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:29.357516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:29.357531 | orchestrator | 2026-03-28 04:59:29.357543 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-28 04:59:29.357556 | orchestrator | Saturday 28 March 2026 04:59:21 +0000 (0:00:03.808) 0:00:12.162 ******** 2026-03-28 04:59:29.357568 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:29.357582 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:29.357594 | orchestrator | ok: [testbed-node-0] 2026-03-28 04:59:29.357607 | orchestrator | 2026-03-28 04:59:29.357620 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-28 04:59:29.357633 | orchestrator | Saturday 28 March 2026 04:59:22 +0000 (0:00:01.549) 0:00:13.711 ******** 2026-03-28 04:59:29.357645 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:29.357657 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:29.357670 | 
orchestrator | ok: [testbed-node-0] 2026-03-28 04:59:29.357683 | orchestrator | 2026-03-28 04:59:29.357696 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-28 04:59:29.357707 | orchestrator | Saturday 28 March 2026 04:59:24 +0000 (0:00:02.220) 0:00:15.932 ******** 2026-03-28 04:59:29.357728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:41.671703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:41.671829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:41.671936 | orchestrator | 
2026-03-28 04:59:41.671954 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-28 04:59:41.671968 | orchestrator | Saturday 28 March 2026 04:59:29 +0000 (0:00:04.488) 0:00:20.421 ********
2026-03-28 04:59:41.671980 | orchestrator | skipping: [testbed-node-1]
2026-03-28 04:59:41.671993 | orchestrator | skipping: [testbed-node-2]
2026-03-28 04:59:41.672004 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:59:41.672017 | orchestrator |
2026-03-28 04:59:41.672028 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-28 04:59:41.672058 | orchestrator | Saturday 28 March 2026 04:59:31 +0000 (0:00:02.065) 0:00:22.486 ********
2026-03-28 04:59:41.672070 | orchestrator | ok: [testbed-node-0]
2026-03-28 04:59:41.672081 | orchestrator | ok: [testbed-node-1]
2026-03-28 04:59:41.672092 | orchestrator | ok: [testbed-node-2]
2026-03-28 04:59:41.672103 | orchestrator |
2026-03-28 04:59:41.672122 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 04:59:41.672133 | orchestrator | Saturday 28 March 2026 04:59:36 +0000 (0:00:04.845) 0:00:27.332 ********
2026-03-28 04:59:41.672145 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 04:59:41.672156 | orchestrator |
2026-03-28 04:59:41.672168 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28 04:59:41.672179 | orchestrator | Saturday 28 March 2026 04:59:38 +0000 (0:00:01.867) 0:00:29.200 ********
2026-03-28 04:59:41.672191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:41.672204 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:59:41.672238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:48.860544 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:48.860638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:48.860652 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:48.860660 | orchestrator | 2026-03-28 04:59:48.860668 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 04:59:48.860676 | orchestrator | Saturday 28 March 2026 04:59:41 +0000 (0:00:03.532) 0:00:32.732 ******** 2026-03-28 04:59:48.860703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:48.860712 | orchestrator | skipping: [testbed-node-0] 2026-03-28 04:59:48.860745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:48.860755 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:48.860763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:48.860776 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:48.860783 | orchestrator | 2026-03-28 04:59:48.860791 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 04:59:48.860798 | orchestrator | Saturday 28 March 2026 04:59:44 
+0000 (0:00:03.322) 0:00:36.055 ******** 2026-03-28 04:59:48.860816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:53.103319 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
04:59:53.103426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:53.103472 | orchestrator | skipping: [testbed-node-1] 2026-03-28 04:59:53.103503 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 04:59:53.103517 | orchestrator | skipping: [testbed-node-2] 2026-03-28 04:59:53.103529 | orchestrator | 2026-03-28 04:59:53.103541 | orchestrator | TASK 
[service-check-containers : mariadb | Check containers] ******************* 2026-03-28 04:59:53.103553 | orchestrator | Saturday 28 March 2026 04:59:48 +0000 (0:00:03.866) 0:00:39.921 ******** 2026-03-28 04:59:53.103585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:53.103614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 04:59:53.103638 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 05:00:08.446589 | orchestrator | 2026-03-28 05:00:08.446691 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] 
***
2026-03-28 05:00:08.446709 | orchestrator | Saturday 28 March 2026 04:59:53 +0000 (0:00:04.249) 0:00:44.171 ********
2026-03-28 05:00:08.446722 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 05:00:08.446735 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:00:08.446748 | orchestrator | }
2026-03-28 05:00:08.446760 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 05:00:08.446772 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:00:08.446784 | orchestrator | }
2026-03-28 05:00:08.446795 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 05:00:08.446807 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:00:08.446819 | orchestrator | }
2026-03-28 05:00:08.446830 | orchestrator |
2026-03-28 05:00:08.446847 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 05:00:08.446868 | orchestrator | Saturday 28 March 2026 04:59:54 +0000 (0:00:01.378) 0:00:45.550 ********
2026-03-28 05:00:08.446945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka',
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:08.446995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:08.447040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:08.447063 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:08.447093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 05:00:08.447124 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447143 | orchestrator |
2026-03-28 05:00:08.447162 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-03-28 05:00:08.447182 | orchestrator | Saturday 28 March 2026 04:59:58 +0000 (0:00:04.064) 0:00:49.615 ********
2026-03-28 05:00:08.447201 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447222 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447240 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447259 | orchestrator |
2026-03-28 05:00:08.447277 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-03-28 05:00:08.447295 | orchestrator | Saturday 28 March 2026 04:59:59 +0000 (0:00:01.404) 0:00:51.019 ********
2026-03-28 05:00:08.447314 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447336 | orchestrator |
2026-03-28 05:00:08.447356 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-03-28 05:00:08.447375 | orchestrator | Saturday 28 March 2026 05:00:01 +0000 (0:00:01.119) 0:00:52.139 ********
2026-03-28 05:00:08.447396 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447411 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447424 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447438 | orchestrator |
2026-03-28 05:00:08.447451 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-03-28 05:00:08.447465 | orchestrator | Saturday 28 March 2026 05:00:02 +0000 (0:00:01.375) 0:00:53.515 ********
2026-03-28 05:00:08.447478 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447491 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447504 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447517 | orchestrator |
2026-03-28 05:00:08.447530 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-03-28 05:00:08.447541 | orchestrator | Saturday 28 March 2026 05:00:04 +0000 (0:00:01.724) 0:00:55.239 ********
2026-03-28 05:00:08.447551 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447562 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447573 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447584 | orchestrator |
2026-03-28 05:00:08.447595 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-03-28 05:00:08.447606 | orchestrator | Saturday 28 March 2026 05:00:05 +0000 (0:00:01.484) 0:00:56.723 ********
2026-03-28 05:00:08.447617 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447627 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447638 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447649 | orchestrator |
2026-03-28 05:00:08.447660 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-03-28 05:00:08.447671 | orchestrator | Saturday 28 March 2026 05:00:07 +0000 (0:00:01.383) 0:00:58.107 ********
2026-03-28 05:00:08.447681 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:08.447692 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:08.447703 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:08.447714 | orchestrator |
2026-03-28 05:00:08.447737 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-03-28 05:00:26.350975 | orchestrator | Saturday 28 March 2026 05:00:08 +0000 (0:00:01.400) 0:00:59.508 ********
2026-03-28 05:00:26.351093 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:26.351110 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:26.351121 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:00:26.351133 | orchestrator |
2026-03-28 05:00:26.351145 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-03-28 05:00:26.351157 | orchestrator | Saturday 28 March 2026 05:00:09 +0000 (0:00:01.561) 0:01:01.069 ********
2026-03-28 05:00:26.351168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:00:26.351180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 05:00:26.351191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 05:00:26.351224 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:00:26.351236 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 05:00:26.351247 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:00:26.351258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 05:00:26.351269 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:00:26.351280 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 05:00:26.351291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 05:00:26.351302 | orchestrator | skipping: [testbed-node-2] =>
(item=testbed-node-2)  2026-03-28 05:00:26.351313 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351324 | orchestrator | 2026-03-28 05:00:26.351335 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-28 05:00:26.351361 | orchestrator | Saturday 28 March 2026 05:00:11 +0000 (0:00:01.326) 0:01:02.395 ******** 2026-03-28 05:00:26.351372 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351384 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351395 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351406 | orchestrator | 2026-03-28 05:00:26.351417 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-28 05:00:26.351429 | orchestrator | Saturday 28 March 2026 05:00:12 +0000 (0:00:01.316) 0:01:03.712 ******** 2026-03-28 05:00:26.351440 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351451 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351462 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351476 | orchestrator | 2026-03-28 05:00:26.351489 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-28 05:00:26.351502 | orchestrator | Saturday 28 March 2026 05:00:13 +0000 (0:00:01.280) 0:01:04.993 ******** 2026-03-28 05:00:26.351515 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351527 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351540 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351553 | orchestrator | 2026-03-28 05:00:26.351566 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-28 05:00:26.351579 | orchestrator | Saturday 28 March 2026 05:00:15 +0000 (0:00:01.363) 0:01:06.356 ******** 2026-03-28 05:00:26.351592 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351605 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:00:26.351618 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351630 | orchestrator | 2026-03-28 05:00:26.351644 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-28 05:00:26.351656 | orchestrator | Saturday 28 March 2026 05:00:16 +0000 (0:00:01.343) 0:01:07.700 ******** 2026-03-28 05:00:26.351669 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351695 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351707 | orchestrator | 2026-03-28 05:00:26.351720 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-28 05:00:26.351733 | orchestrator | Saturday 28 March 2026 05:00:18 +0000 (0:00:01.490) 0:01:09.190 ******** 2026-03-28 05:00:26.351746 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351758 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351772 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351785 | orchestrator | 2026-03-28 05:00:26.351798 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-28 05:00:26.351811 | orchestrator | Saturday 28 March 2026 05:00:19 +0000 (0:00:01.565) 0:01:10.756 ******** 2026-03-28 05:00:26.351825 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351838 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.351849 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351860 | orchestrator | 2026-03-28 05:00:26.351871 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-28 05:00:26.351882 | orchestrator | Saturday 28 March 2026 05:00:21 +0000 (0:00:01.428) 0:01:12.185 ******** 2026-03-28 05:00:26.351900 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.351911 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:00:26.351922 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:26.351957 | orchestrator | 2026-03-28 05:00:26.351968 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-28 05:00:26.351980 | orchestrator | Saturday 28 March 2026 05:00:22 +0000 (0:00:01.415) 0:01:13.600 ******** 2026-03-28 05:00:26.352017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:26.352039 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:26.352053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:26.352072 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:26.352092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:44.144871 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145071 | orchestrator | 2026-03-28 05:00:44.145105 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-28 05:00:44.145127 | orchestrator | Saturday 28 March 2026 05:00:26 +0000 (0:00:03.809) 0:01:17.409 ******** 2026-03-28 05:00:44.145148 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145186 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145204 | orchestrator | 2026-03-28 05:00:44.145244 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-28 05:00:44.145265 | orchestrator | Saturday 28 March 2026 05:00:28 +0000 (0:00:01.750) 0:01:19.159 ******** 2026-03-28 05:00:44.145291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:44.145348 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:44.145421 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 05:00:44.145483 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145501 | orchestrator | 2026-03-28 05:00:44.145519 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-28 05:00:44.145538 | orchestrator | Saturday 28 March 2026 05:00:31 +0000 (0:00:03.589) 0:01:22.749 ******** 2026-03-28 05:00:44.145557 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145575 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145592 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145610 | orchestrator | 2026-03-28 05:00:44.145628 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-28 05:00:44.145646 | orchestrator | Saturday 28 March 2026 05:00:33 +0000 (0:00:01.767) 0:01:24.517 ******** 2026-03-28 05:00:44.145664 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145700 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145717 | orchestrator | 2026-03-28 05:00:44.145734 | orchestrator | TASK [service-check : mariadb | Fail 
if containers are missing or not running] *** 2026-03-28 05:00:44.145753 | orchestrator | Saturday 28 March 2026 05:00:34 +0000 (0:00:01.450) 0:01:25.967 ******** 2026-03-28 05:00:44.145772 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145790 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145809 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145828 | orchestrator | 2026-03-28 05:00:44.145848 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-28 05:00:44.145869 | orchestrator | Saturday 28 March 2026 05:00:36 +0000 (0:00:01.587) 0:01:27.555 ******** 2026-03-28 05:00:44.145888 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.145907 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.145927 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.145947 | orchestrator | 2026-03-28 05:00:44.146014 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-28 05:00:44.146116 | orchestrator | Saturday 28 March 2026 05:00:38 +0000 (0:00:01.804) 0:01:29.360 ******** 2026-03-28 05:00:44.146136 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:00:44.146155 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:00:44.146173 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:00:44.146192 | orchestrator | 2026-03-28 05:00:44.146211 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-28 05:00:44.146230 | orchestrator | Saturday 28 March 2026 05:00:40 +0000 (0:00:02.126) 0:01:31.487 ******** 2026-03-28 05:00:44.146248 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:00:44.146268 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:00:44.146287 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:00:44.146306 | orchestrator | 2026-03-28 05:00:44.146324 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume 
availability] ************* 2026-03-28 05:00:44.146343 | orchestrator | Saturday 28 March 2026 05:00:42 +0000 (0:00:01.976) 0:01:33.463 ******** 2026-03-28 05:00:44.146363 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:00:44.146381 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:00:44.146399 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:00:44.146417 | orchestrator | 2026-03-28 05:00:44.146436 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-28 05:00:44.146455 | orchestrator | Saturday 28 March 2026 05:00:43 +0000 (0:00:01.488) 0:01:34.952 ******** 2026-03-28 05:00:44.146497 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506281 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506403 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506446 | orchestrator | 2026-03-28 05:03:27.506461 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-28 05:03:27.506474 | orchestrator | Saturday 28 March 2026 05:00:45 +0000 (0:00:01.965) 0:01:36.918 ******** 2026-03-28 05:03:27.506486 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506497 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506508 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506519 | orchestrator | 2026-03-28 05:03:27.506545 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-28 05:03:27.506557 | orchestrator | Saturday 28 March 2026 05:00:48 +0000 (0:00:02.354) 0:01:39.272 ******** 2026-03-28 05:03:27.506568 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506579 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506589 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506600 | orchestrator | 2026-03-28 05:03:27.506611 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-28 05:03:27.506622 | orchestrator | 
Saturday 28 March 2026 05:00:49 +0000 (0:00:01.551) 0:01:40.824 ******** 2026-03-28 05:03:27.506633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.506645 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.506655 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.506666 | orchestrator | 2026-03-28 05:03:27.506677 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-28 05:03:27.506688 | orchestrator | Saturday 28 March 2026 05:00:51 +0000 (0:00:01.409) 0:01:42.233 ******** 2026-03-28 05:03:27.506699 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506710 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506721 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506732 | orchestrator | 2026-03-28 05:03:27.506743 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-28 05:03:27.506755 | orchestrator | Saturday 28 March 2026 05:00:54 +0000 (0:00:03.603) 0:01:45.837 ******** 2026-03-28 05:03:27.506769 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506781 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506794 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506806 | orchestrator | 2026-03-28 05:03:27.506819 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-28 05:03:27.506831 | orchestrator | Saturday 28 March 2026 05:00:56 +0000 (0:00:01.530) 0:01:47.368 ******** 2026-03-28 05:03:27.506844 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.506857 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.506870 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.506883 | orchestrator | 2026-03-28 05:03:27.506896 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-28 05:03:27.506910 | orchestrator | Saturday 28 March 2026 05:00:57 +0000 
(0:00:01.507) 0:01:48.876 ******** 2026-03-28 05:03:27.506923 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.506935 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.506947 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.506959 | orchestrator | 2026-03-28 05:03:27.506972 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 05:03:27.506985 | orchestrator | Saturday 28 March 2026 05:00:59 +0000 (0:00:01.808) 0:01:50.684 ******** 2026-03-28 05:03:27.506998 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.507011 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.507023 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.507036 | orchestrator | 2026-03-28 05:03:27.507048 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-28 05:03:27.507061 | orchestrator | Saturday 28 March 2026 05:01:01 +0000 (0:00:01.616) 0:01:52.301 ******** 2026-03-28 05:03:27.507073 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.507086 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.507099 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.507111 | orchestrator | 2026-03-28 05:03:27.507123 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-28 05:03:27.507143 | orchestrator | Saturday 28 March 2026 05:01:02 +0000 (0:00:01.641) 0:01:53.943 ******** 2026-03-28 05:03:27.507154 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:03:27.507165 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:03:27.507197 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:03:27.507209 | orchestrator | 2026-03-28 05:03:27.507220 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-28 05:03:27.507230 | orchestrator | Saturday 28 March 2026 05:01:04 +0000 
(0:00:01.694) 0:01:55.637 ******** 2026-03-28 05:03:27.507241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.507252 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.507263 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.507273 | orchestrator | 2026-03-28 05:03:27.507284 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 05:03:27.507295 | orchestrator | 2026-03-28 05:03:27.507306 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 05:03:27.507316 | orchestrator | Saturday 28 March 2026 05:01:06 +0000 (0:00:01.830) 0:01:57.467 ******** 2026-03-28 05:03:27.507327 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:03:27.507338 | orchestrator | 2026-03-28 05:03:27.507349 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 05:03:27.507360 | orchestrator | Saturday 28 March 2026 05:01:32 +0000 (0:00:26.256) 0:02:23.724 ******** 2026-03-28 05:03:27.507371 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.507381 | orchestrator | 2026-03-28 05:03:27.507392 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 05:03:27.507403 | orchestrator | Saturday 28 March 2026 05:01:37 +0000 (0:00:04.657) 0:02:28.381 ******** 2026-03-28 05:03:27.507414 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.507424 | orchestrator | 2026-03-28 05:03:27.507435 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-28 05:03:27.507446 | orchestrator | 2026-03-28 05:03:27.507457 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 05:03:27.507467 | orchestrator | Saturday 28 March 2026 05:01:40 +0000 (0:00:02.926) 0:02:31.308 ******** 2026-03-28 05:03:27.507487 | orchestrator | changed: [testbed-node-1] 
2026-03-28 05:03:27.507505 | orchestrator |
2026-03-28 05:03:27.507525 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-28 05:03:27.507566 | orchestrator | Saturday 28 March 2026 05:02:06 +0000 (0:00:26.767) 0:02:58.076 ********
2026-03-28 05:03:27.507583 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-03-28 05:03:27.507603 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:03:27.507621 | orchestrator |
2026-03-28 05:03:27.507639 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-28 05:03:27.507666 | orchestrator | Saturday 28 March 2026 05:02:15 +0000 (0:00:08.117) 0:03:06.194 ********
2026-03-28 05:03:27.507685 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:03:27.507703 | orchestrator |
2026-03-28 05:03:27.507720 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-28 05:03:27.507737 | orchestrator |
2026-03-28 05:03:27.507756 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-28 05:03:27.507774 | orchestrator | Saturday 28 March 2026 05:02:18 +0000 (0:00:03.500) 0:03:09.694 ********
2026-03-28 05:03:27.507793 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:03:27.507811 | orchestrator |
2026-03-28 05:03:27.507831 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-28 05:03:27.507849 | orchestrator | Saturday 28 March 2026 05:02:44 +0000 (0:00:26.335) 0:03:36.030 ********
2026-03-28 05:03:27.507865 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-03-28 05:03:27.507876 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.507887 | orchestrator | 2026-03-28 05:03:27.507897 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 05:03:27.507918 | orchestrator | Saturday 28 March 2026 05:02:52 +0000 (0:00:07.983) 0:03:44.013 ******** 2026-03-28 05:03:27.507929 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-28 05:03:27.507940 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-28 05:03:27.507951 | orchestrator | mariadb_bootstrap_restart 2026-03-28 05:03:27.507962 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.507973 | orchestrator | 2026-03-28 05:03:27.507984 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-28 05:03:27.507994 | orchestrator | skipping: no hosts matched 2026-03-28 05:03:27.508005 | orchestrator | 2026-03-28 05:03:27.508016 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 05:03:27.508027 | orchestrator | skipping: no hosts matched 2026-03-28 05:03:27.508038 | orchestrator | 2026-03-28 05:03:27.508048 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 05:03:27.508059 | orchestrator | 2026-03-28 05:03:27.508070 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 05:03:27.508081 | orchestrator | Saturday 28 March 2026 05:02:57 +0000 (0:00:04.179) 0:03:48.193 ******** 2026-03-28 05:03:27.508092 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:03:27.508102 | orchestrator | 2026-03-28 05:03:27.508113 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-28 05:03:27.508124 | orchestrator | Saturday 28 March 2026 
05:02:59 +0000 (0:00:01.996) 0:03:50.189 ******** 2026-03-28 05:03:27.508135 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508146 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508157 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.508168 | orchestrator | 2026-03-28 05:03:27.508200 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-28 05:03:27.508211 | orchestrator | Saturday 28 March 2026 05:03:02 +0000 (0:00:03.224) 0:03:53.414 ******** 2026-03-28 05:03:27.508222 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508233 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508244 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:03:27.508255 | orchestrator | 2026-03-28 05:03:27.508266 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-28 05:03:27.508277 | orchestrator | Saturday 28 March 2026 05:03:05 +0000 (0:00:03.171) 0:03:56.586 ******** 2026-03-28 05:03:27.508288 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508299 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508309 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.508320 | orchestrator | 2026-03-28 05:03:27.508331 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-28 05:03:27.508342 | orchestrator | Saturday 28 March 2026 05:03:08 +0000 (0:00:03.057) 0:03:59.643 ******** 2026-03-28 05:03:27.508353 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508364 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508375 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:03:27.508386 | orchestrator | 2026-03-28 05:03:27.508396 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-28 05:03:27.508407 | orchestrator | Saturday 28 March 2026 05:03:12 +0000 
(0:00:03.548) 0:04:03.191 ******** 2026-03-28 05:03:27.508418 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.508429 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.508440 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.508451 | orchestrator | 2026-03-28 05:03:27.508462 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-28 05:03:27.508473 | orchestrator | Saturday 28 March 2026 05:03:18 +0000 (0:00:06.723) 0:04:09.915 ******** 2026-03-28 05:03:27.508483 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.508494 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508505 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508526 | orchestrator | 2026-03-28 05:03:27.508537 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-28 05:03:27.508548 | orchestrator | Saturday 28 March 2026 05:03:22 +0000 (0:00:03.619) 0:04:13.535 ******** 2026-03-28 05:03:27.508559 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:03:27.508570 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:03:27.508580 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:03:27.508591 | orchestrator | 2026-03-28 05:03:27.508602 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-28 05:03:27.508613 | orchestrator | Saturday 28 March 2026 05:03:24 +0000 (0:00:01.645) 0:04:15.180 ******** 2026-03-28 05:03:27.508624 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:03:27.508635 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:03:27.508646 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:03:27.508657 | orchestrator | 2026-03-28 05:03:27.508677 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-28 05:03:49.291175 | orchestrator | Saturday 28 March 2026 05:03:27 +0000 (0:00:03.385) 0:04:18.566 ******** 
2026-03-28 05:03:49.291277 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:03:49.291284 | orchestrator | 2026-03-28 05:03:49.291289 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-03-28 05:03:49.291306 | orchestrator | Saturday 28 March 2026 05:03:29 +0000 (0:00:02.051) 0:04:20.618 ******** 2026-03-28 05:03:49.291310 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:03:49.291315 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:03:49.291319 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:03:49.291323 | orchestrator | 2026-03-28 05:03:49.291327 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 05:03:49.291332 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 05:03:49.291337 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-28 05:03:49.291341 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-28 05:03:49.291345 | orchestrator | 2026-03-28 05:03:49.291349 | orchestrator | 2026-03-28 05:03:49.291353 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 05:03:49.291357 | orchestrator | Saturday 28 March 2026 05:03:48 +0000 (0:00:19.145) 0:04:39.764 ******** 2026-03-28 05:03:49.291361 | orchestrator | =============================================================================== 2026-03-28 05:03:49.291364 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 79.36s 2026-03-28 05:03:49.291368 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 20.76s 2026-03-28 05:03:49.291372 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
19.15s 2026-03-28 05:03:49.291376 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.61s 2026-03-28 05:03:49.291380 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.72s 2026-03-28 05:03:49.291384 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.85s 2026-03-28 05:03:49.291387 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.49s 2026-03-28 05:03:49.291391 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.25s 2026-03-28 05:03:49.291395 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.07s 2026-03-28 05:03:49.291399 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.87s 2026-03-28 05:03:49.291403 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.81s 2026-03-28 05:03:49.291407 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.81s 2026-03-28 05:03:49.291426 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.62s 2026-03-28 05:03:49.291430 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.60s 2026-03-28 05:03:49.291434 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.59s 2026-03-28 05:03:49.291438 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.55s 2026-03-28 05:03:49.291441 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.53s 2026-03-28 05:03:49.291445 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.39s 2026-03-28 05:03:49.291449 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.32s 
2026-03-28 05:03:49.291453 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.22s 2026-03-28 05:03:49.629769 | orchestrator | + osism apply -a upgrade rabbitmq 2026-03-28 05:03:51.722843 | orchestrator | 2026-03-28 05:03:51 | INFO  | Task 77145034-8a45-47f7-8fc7-652cb58284a5 (rabbitmq) was prepared for execution. 2026-03-28 05:03:51.722947 | orchestrator | 2026-03-28 05:03:51 | INFO  | It takes a moment until task 77145034-8a45-47f7-8fc7-652cb58284a5 (rabbitmq) has been started and output is visible here. 2026-03-28 05:04:23.941987 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-28 05:04:23.942106 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-28 05:04:23.942121 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-28 05:04:23.942125 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-28 05:04:23.942134 | orchestrator | 2026-03-28 05:04:23.942139 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 05:04:23.942143 | orchestrator | 2026-03-28 05:04:23.942147 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 05:04:23.942151 | orchestrator | Saturday 28 March 2026 05:03:57 +0000 (0:00:01.358) 0:00:01.358 ******** 2026-03-28 05:04:23.942155 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:23.942160 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:04:23.942164 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:04:23.942168 | orchestrator | 2026-03-28 05:04:23.942172 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 05:04:23.942176 | orchestrator | Saturday 28 March 2026 05:03:58 +0000 (0:00:01.100) 0:00:02.459 ******** 2026-03-28 05:04:23.942180 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-03-28 05:04:23.942184 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-28 05:04:23.942200 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-28 05:04:23.942204 | orchestrator | 2026-03-28 05:04:23.942208 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-28 05:04:23.942212 | orchestrator | 2026-03-28 05:04:23.942215 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 05:04:23.942219 | orchestrator | Saturday 28 March 2026 05:03:59 +0000 (0:00:01.220) 0:00:03.679 ******** 2026-03-28 05:04:23.942223 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:04:23.942228 | orchestrator | 2026-03-28 05:04:23.942232 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 05:04:23.942268 | orchestrator | Saturday 28 March 2026 05:04:02 +0000 (0:00:02.345) 0:00:06.025 ******** 2026-03-28 05:04:23.942272 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:23.942276 | orchestrator | 2026-03-28 05:04:23.942280 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-28 05:04:23.942284 | orchestrator | Saturday 28 March 2026 05:04:03 +0000 (0:00:01.366) 0:00:07.392 ******** 2026-03-28 05:04:23.942302 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:23.942306 | orchestrator | 2026-03-28 05:04:23.942310 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-28 05:04:23.942313 | orchestrator | Saturday 28 March 2026 05:04:05 +0000 (0:00:02.316) 0:00:09.708 ******** 2026-03-28 05:04:23.942318 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:04:23.942321 | orchestrator | 2026-03-28 05:04:23.942326 | orchestrator | TASK [rabbitmq : Check if 
running RabbitMQ is at most one version behind] ****** 2026-03-28 05:04:23.942329 | orchestrator | Saturday 28 March 2026 05:04:15 +0000 (0:00:09.155) 0:00:18.864 ******** 2026-03-28 05:04:23.942333 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 05:04:23.942337 | orchestrator |  "changed": false, 2026-03-28 05:04:23.942341 | orchestrator |  "msg": "All assertions passed" 2026-03-28 05:04:23.942345 | orchestrator | } 2026-03-28 05:04:23.942349 | orchestrator | 2026-03-28 05:04:23.942353 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-28 05:04:23.942357 | orchestrator | Saturday 28 March 2026 05:04:15 +0000 (0:00:00.358) 0:00:19.222 ******** 2026-03-28 05:04:23.942361 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 05:04:23.942365 | orchestrator |  "changed": false, 2026-03-28 05:04:23.942368 | orchestrator |  "msg": "All assertions passed" 2026-03-28 05:04:23.942372 | orchestrator | } 2026-03-28 05:04:23.942376 | orchestrator | 2026-03-28 05:04:23.942380 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 05:04:23.942384 | orchestrator | Saturday 28 March 2026 05:04:16 +0000 (0:00:00.709) 0:00:19.932 ******** 2026-03-28 05:04:23.942388 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:04:23.942391 | orchestrator | 2026-03-28 05:04:23.942395 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-28 05:04:23.942399 | orchestrator | Saturday 28 March 2026 05:04:17 +0000 (0:00:01.021) 0:00:20.954 ******** 2026-03-28 05:04:23.942403 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:23.942407 | orchestrator | 2026-03-28 05:04:23.942411 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-28 05:04:23.942415 | orchestrator | Saturday 28 March 
2026 05:04:18 +0000 (0:00:01.311) 0:00:22.265 ******** 2026-03-28 05:04:23.942418 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:23.942422 | orchestrator | 2026-03-28 05:04:23.942426 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-28 05:04:23.942430 | orchestrator | Saturday 28 March 2026 05:04:20 +0000 (0:00:01.966) 0:00:24.232 ******** 2026-03-28 05:04:23.942434 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:04:23.942438 | orchestrator | 2026-03-28 05:04:23.942442 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-28 05:04:23.942445 | orchestrator | Saturday 28 March 2026 05:04:21 +0000 (0:00:01.200) 0:00:25.433 ******** 2026-03-28 05:04:23.942465 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:23.942479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:23.942484 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:23.942488 | orchestrator | 2026-03-28 05:04:23.942492 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-28 05:04:23.942496 | orchestrator | Saturday 28 March 2026 05:04:22 +0000 (0:00:00.856) 0:00:26.289 ******** 2026-03-28 05:04:23.942503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:35.820126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:35.820314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:35.820336 | orchestrator | 2026-03-28 05:04:35.820351 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-28 05:04:35.820364 | orchestrator | Saturday 28 March 2026 05:04:23 +0000 
(0:00:01.432) 0:00:27.722 ******** 2026-03-28 05:04:35.820376 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 05:04:35.820387 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 05:04:35.820398 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-28 05:04:35.820409 | orchestrator | 2026-03-28 05:04:35.820420 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-28 05:04:35.820431 | orchestrator | Saturday 28 March 2026 05:04:25 +0000 (0:00:01.461) 0:00:29.183 ******** 2026-03-28 05:04:35.820442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 05:04:35.820452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 05:04:35.820464 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-28 05:04:35.820475 | orchestrator | 2026-03-28 05:04:35.820486 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-28 05:04:35.820497 | orchestrator | Saturday 28 March 2026 05:04:27 +0000 (0:00:02.163) 0:00:31.347 ******** 2026-03-28 05:04:35.820508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 05:04:35.820518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 05:04:35.820529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-28 05:04:35.820540 | orchestrator | 2026-03-28 05:04:35.820551 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-28 05:04:35.820562 | orchestrator | Saturday 28 
March 2026 05:04:28 +0000 (0:00:01.402) 0:00:32.750 ******** 2026-03-28 05:04:35.820572 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 05:04:35.820583 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 05:04:35.820603 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-28 05:04:35.820614 | orchestrator | 2026-03-28 05:04:35.820625 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-28 05:04:35.820654 | orchestrator | Saturday 28 March 2026 05:04:30 +0000 (0:00:01.408) 0:00:34.159 ******** 2026-03-28 05:04:35.820667 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 05:04:35.820680 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 05:04:35.820692 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-28 05:04:35.820704 | orchestrator | 2026-03-28 05:04:35.820717 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-28 05:04:35.820729 | orchestrator | Saturday 28 March 2026 05:04:31 +0000 (0:00:01.264) 0:00:35.423 ******** 2026-03-28 05:04:35.820741 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 05:04:35.820754 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 05:04:35.820766 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-28 05:04:35.820778 | orchestrator | 2026-03-28 05:04:35.820791 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-28 05:04:35.820804 | orchestrator 
| Saturday 28 March 2026 05:04:33 +0000 (0:00:01.597) 0:00:37.021 ******** 2026-03-28 05:04:35.820817 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:04:35.820829 | orchestrator | 2026-03-28 05:04:35.820842 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-28 05:04:35.820861 | orchestrator | Saturday 28 March 2026 05:04:34 +0000 (0:00:01.018) 0:00:38.039 ******** 2026-03-28 05:04:35.820876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:35.820892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:35.820923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:04:41.470885 | orchestrator | 2026-03-28 05:04:41.470999 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend 
internal TLS certificate] *** 2026-03-28 05:04:41.471102 | orchestrator | Saturday 28 March 2026 05:04:35 +0000 (0:00:01.553) 0:00:39.593 ******** 2026-03-28 05:04:41.471142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471160 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:04:41.471174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471186 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:04:41.471198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471233 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:04:41.471245 | orchestrator | 2026-03-28 05:04:41.471359 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-28 05:04:41.471375 | orchestrator | Saturday 28 March 2026 05:04:36 +0000 (0:00:00.481) 0:00:40.075 
******** 2026-03-28 05:04:41.471424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471603 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:04:41.471615 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:04:41.471628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:04:41.471651 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:04:41.471663 | orchestrator | 2026-03-28 05:04:41.471674 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 05:04:41.471685 | orchestrator | Saturday 28 March 2026 05:04:37 +0000 (0:00:01.083) 0:00:41.158 ******** 2026-03-28 05:04:41.471697 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:04:41.471708 | orchestrator | ok: [testbed-node-2] 2026-03-28 
05:04:41.471719 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:04:41.471730 | orchestrator | 2026-03-28 05:04:41.471741 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-28 05:04:41.471752 | orchestrator | Saturday 28 March 2026 05:04:40 +0000 (0:00:02.808) 0:00:43.967 ******** 2026-03-28 05:04:41.471778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:05:32.536018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:05:32.536129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 05:05:32.536167 | orchestrator | 2026-03-28 05:05:32.536180 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-28 05:05:32.536191 | orchestrator | Saturday 28 March 2026 05:04:41 +0000 (0:00:01.291) 0:00:45.258 ******** 2026-03-28 05:05:32.536202 | 
orchestrator | changed: [testbed-node-0] => { 2026-03-28 05:05:32.536213 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:05:32.536223 | orchestrator | } 2026-03-28 05:05:32.536233 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 05:05:32.536243 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:05:32.536252 | orchestrator | } 2026-03-28 05:05:32.536262 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 05:05:32.536272 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:05:32.536281 | orchestrator | } 2026-03-28 05:05:32.536291 | orchestrator | 2026-03-28 05:05:32.536301 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 05:05:32.536361 | orchestrator | Saturday 28 March 2026 05:04:41 +0000 (0:00:00.430) 0:00:45.689 ******** 2026-03-28 05:05:32.536372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:05:32.536408 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:05:32.536421 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-28 05:05:32.536431 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-28 05:05:32.536459 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:05:32.536470 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:05:32.536481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 05:05:32.536492 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:05:32.536501 | orchestrator | 2026-03-28 05:05:32.536511 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-28 05:05:32.536521 | orchestrator | Saturday 28 March 2026 05:04:43 +0000 (0:00:01.390) 0:00:47.079 ******** 2026-03-28 05:05:32.536530 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:05:32.536542 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:05:32.536553 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:05:32.536564 | orchestrator | 2026-03-28 05:05:32.536575 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 05:05:32.536587 | orchestrator | 2026-03-28 05:05:32.536599 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 05:05:32.536611 | orchestrator | Saturday 28 March 2026 05:04:44 +0000 (0:00:01.074) 0:00:48.154 ******** 2026-03-28 05:05:32.536622 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:05:32.536634 | orchestrator | 2026-03-28 05:05:32.536645 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 05:05:32.536657 | orchestrator | Saturday 28 March 2026 05:04:45 +0000 (0:00:01.113) 0:00:49.267 ******** 2026-03-28 05:05:32.536668 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 05:05:32.536679 | orchestrator | 2026-03-28 05:05:32.536690 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 05:05:32.536702 | orchestrator | Saturday 28 March 2026 05:04:53 +0000 (0:00:08.214) 0:00:57.481 ******** 2026-03-28 05:05:32.536714 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:05:32.536725 | orchestrator | 2026-03-28 05:05:32.536736 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 05:05:32.536748 | orchestrator | Saturday 28 March 2026 05:05:01 +0000 (0:00:08.200) 0:01:05.682 ******** 2026-03-28 05:05:32.536760 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:05:32.536771 | orchestrator | 2026-03-28 05:05:32.536784 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 05:05:32.536795 | orchestrator | 2026-03-28 05:05:32.536807 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 05:05:32.536819 | orchestrator | Saturday 28 March 2026 05:05:11 +0000 (0:00:09.115) 0:01:14.797 ******** 2026-03-28 05:05:32.536831 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:05:32.536842 | orchestrator | 2026-03-28 05:05:32.536854 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 05:05:32.536866 | orchestrator | Saturday 28 March 2026 05:05:12 +0000 (0:00:01.020) 0:01:15.818 ******** 2026-03-28 05:05:32.536878 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:05:32.536890 | orchestrator | 2026-03-28 05:05:32.536901 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 05:05:32.536911 | orchestrator | Saturday 28 March 2026 05:05:19 +0000 (0:00:07.666) 0:01:23.484 ******** 2026-03-28 05:05:32.536933 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:06:18.460619 | 
orchestrator | 2026-03-28 05:06:18.460731 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 05:06:18.460745 | orchestrator | Saturday 28 March 2026 05:05:32 +0000 (0:00:12.834) 0:01:36.319 ******** 2026-03-28 05:06:18.460756 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:06:18.460767 | orchestrator | 2026-03-28 05:06:18.460778 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 05:06:18.460788 | orchestrator | 2026-03-28 05:06:18.460798 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 05:06:18.460808 | orchestrator | Saturday 28 March 2026 05:05:40 +0000 (0:00:08.448) 0:01:44.768 ******** 2026-03-28 05:06:18.460833 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:06:18.460845 | orchestrator | 2026-03-28 05:06:18.460855 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 05:06:18.460864 | orchestrator | Saturday 28 March 2026 05:05:42 +0000 (0:00:01.290) 0:01:46.058 ******** 2026-03-28 05:06:18.460874 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:06:18.460884 | orchestrator | 2026-03-28 05:06:18.460894 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 05:06:18.460903 | orchestrator | Saturday 28 March 2026 05:05:50 +0000 (0:00:07.875) 0:01:53.934 ******** 2026-03-28 05:06:18.460913 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:06:18.460923 | orchestrator | 2026-03-28 05:06:18.460933 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 05:06:18.460942 | orchestrator | Saturday 28 March 2026 05:06:03 +0000 (0:00:13.633) 0:02:07.568 ******** 2026-03-28 05:06:18.460952 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:06:18.460962 | orchestrator | 2026-03-28 05:06:18.460971 | 
orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-28 05:06:18.460981 | orchestrator | 2026-03-28 05:06:18.460991 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-28 05:06:18.461001 | orchestrator | Saturday 28 March 2026 05:06:13 +0000 (0:00:10.093) 0:02:17.662 ******** 2026-03-28 05:06:18.461010 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 05:06:18.461020 | orchestrator | 2026-03-28 05:06:18.461030 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 05:06:18.461039 | orchestrator | Saturday 28 March 2026 05:06:14 +0000 (0:00:00.581) 0:02:18.243 ******** 2026-03-28 05:06:18.461049 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:06:18.461059 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:06:18.461069 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:06:18.461079 | orchestrator | 2026-03-28 05:06:18.461089 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 05:06:18.461099 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 05:06:18.461109 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 05:06:18.461119 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 05:06:18.461129 | orchestrator | 2026-03-28 05:06:18.461139 | orchestrator | 2026-03-28 05:06:18.461149 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 05:06:18.461162 | orchestrator | Saturday 28 March 2026 05:06:18 +0000 (0:00:03.586) 0:02:21.830 ******** 2026-03-28 05:06:18.461173 | orchestrator | 
=============================================================================== 2026-03-28 05:06:18.461185 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 34.67s 2026-03-28 05:06:18.461196 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 27.66s 2026-03-28 05:06:18.461232 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 23.76s 2026-03-28 05:06:18.461244 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.16s 2026-03-28 05:06:18.461256 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.59s 2026-03-28 05:06:18.461267 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.43s 2026-03-28 05:06:18.461279 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.81s 2026-03-28 05:06:18.461290 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.35s 2026-03-28 05:06:18.461302 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.32s 2026-03-28 05:06:18.461313 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.16s 2026-03-28 05:06:18.461325 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.97s 2026-03-28 05:06:18.461336 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.60s 2026-03-28 05:06:18.461347 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.55s 2026-03-28 05:06:18.461415 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.46s 2026-03-28 05:06:18.461427 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.43s 2026-03-28 05:06:18.461438 | orchestrator | rabbitmq : 
Copying over advanced.config --------------------------------- 1.41s 2026-03-28 05:06:18.461450 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s 2026-03-28 05:06:18.461460 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.39s 2026-03-28 05:06:18.461469 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.37s 2026-03-28 05:06:18.461479 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.31s 2026-03-28 05:06:18.842261 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-28 05:06:20.997880 | orchestrator | 2026-03-28 05:06:20 | INFO  | Task 38e24a95-bbb6-4403-9a7b-5e05ca2aa062 (openvswitch) was prepared for execution. 2026-03-28 05:06:20.997976 | orchestrator | 2026-03-28 05:06:20 | INFO  | It takes a moment until task 38e24a95-bbb6-4403-9a7b-5e05ca2aa062 (openvswitch) has been started and output is visible here. 
2026-03-28 05:06:48.633069 | orchestrator | 2026-03-28 05:06:48.633187 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 05:06:48.633205 | orchestrator | 2026-03-28 05:06:48.633234 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 05:06:48.633246 | orchestrator | Saturday 28 March 2026 05:06:26 +0000 (0:00:01.460) 0:00:01.460 ******** 2026-03-28 05:06:48.633258 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:06:48.633271 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:06:48.633283 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:06:48.633294 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:06:48.633305 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:06:48.633319 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:06:48.633338 | orchestrator | 2026-03-28 05:06:48.633357 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 05:06:48.633374 | orchestrator | Saturday 28 March 2026 05:06:29 +0000 (0:00:02.687) 0:00:04.147 ******** 2026-03-28 05:06:48.633475 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633496 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633515 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633533 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633551 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633569 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 05:06:48.633617 | orchestrator | 2026-03-28 05:06:48.633638 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-28 05:06:48.633660 | orchestrator | 2026-03-28 05:06:48.633681 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-28 05:06:48.633703 | orchestrator | Saturday 28 March 2026 05:06:31 +0000 (0:00:02.199) 0:00:06.347 ******** 2026-03-28 05:06:48.633725 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 05:06:48.633749 | orchestrator | 2026-03-28 05:06:48.633772 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 05:06:48.633792 | orchestrator | Saturday 28 March 2026 05:06:34 +0000 (0:00:02.952) 0:00:09.300 ******** 2026-03-28 05:06:48.633813 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-28 05:06:48.633835 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-28 05:06:48.633856 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-28 05:06:48.633878 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-28 05:06:48.633898 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-28 05:06:48.633921 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-28 05:06:48.633941 | orchestrator | 2026-03-28 05:06:48.633962 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 05:06:48.633983 | orchestrator | Saturday 28 March 2026 05:06:37 +0000 (0:00:02.619) 0:00:11.920 ******** 2026-03-28 05:06:48.634005 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-28 05:06:48.634116 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-28 05:06:48.634140 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-28 05:06:48.634160 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-28 
05:06:48.634193 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-28 05:06:48.634212 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-28 05:06:48.634232 | orchestrator | 2026-03-28 05:06:48.634253 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 05:06:48.634274 | orchestrator | Saturday 28 March 2026 05:06:40 +0000 (0:00:02.837) 0:00:14.757 ******** 2026-03-28 05:06:48.634294 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-28 05:06:48.634315 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:06:48.634337 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-28 05:06:48.634358 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:06:48.634408 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-28 05:06:48.634430 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:06:48.634450 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-28 05:06:48.634470 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:06:48.634491 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-28 05:06:48.634511 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:06:48.634531 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-28 05:06:48.634588 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:06:48.634610 | orchestrator | 2026-03-28 05:06:48.634630 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-28 05:06:48.634650 | orchestrator | Saturday 28 March 2026 05:06:43 +0000 (0:00:03.098) 0:00:17.855 ******** 2026-03-28 05:06:48.634671 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:06:48.634691 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:06:48.634711 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:06:48.634732 | orchestrator | skipping: 
[testbed-node-3]
2026-03-28 05:06:48.634750 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:06:48.634769 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:06:48.634787 | orchestrator |
2026-03-28 05:06:48.634806 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-28 05:06:48.634842 | orchestrator | Saturday 28 March 2026 05:06:45 +0000 (0:00:02.429) 0:00:20.284 ********
2026-03-28 05:06:48.634907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:48.634935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:48.634956 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:48.634975 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:48.634994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:48.635015 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:48.635071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.500820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.500926 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.500942 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.500955 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501012 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.501035 | orchestrator |
2026-03-28 05:06:51.501057 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-28 05:06:51.501078 | orchestrator | Saturday 28 March 2026 05:06:48 +0000 (0:00:02.839) 0:00:23.124 ********
2026-03-28 05:06:51.501113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501127 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501138 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501172 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:51.501189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:51.501209 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257296 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:57.257368 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:57.257376 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:57.257430 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:57.257445 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:06:57.257450 | orchestrator |
2026-03-28 05:06:57.257455 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-28 05:06:57.257460 | orchestrator | Saturday 28 March 2026 05:06:52 +0000 (0:00:03.994) 0:00:27.118 ********
2026-03-28 05:06:57.257464 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:06:57.257469 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:06:57.257473 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:06:57.257477 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:06:57.257481 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:06:57.257485 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:06:57.257488 | orchestrator |
2026-03-28 05:06:57.257493 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-28 05:06:57.257506 | orchestrator | Saturday 28 March 2026 05:06:55 +0000 (0:00:02.658) 0:00:29.777 ********
2026-03-28 05:06:57.257511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:06:57.257545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:01.289360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289543 | orchestrator |
2026-03-28 05:07:01.289549 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-28 05:07:01.289555 | orchestrator | Saturday 28 March 2026 05:06:58 +0000 (0:00:03.386) 0:00:33.163 ********
2026-03-28 05:07:01.289561 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 05:07:01.289567 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289579 | orchestrator | }
2026-03-28 05:07:01.289584 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 05:07:01.289589 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289594 | orchestrator | }
2026-03-28 05:07:01.289599 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 05:07:01.289604 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289609 | orchestrator | }
2026-03-28 05:07:01.289613 | orchestrator | changed: [testbed-node-3] => {
2026-03-28 05:07:01.289618 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289623 | orchestrator | }
2026-03-28 05:07:01.289628 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 05:07:01.289633 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289637 | orchestrator | }
2026-03-28 05:07:01.289642 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 05:07:01.289647 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:07:01.289652 | orchestrator | }
2026-03-28 05:07:01.289658 | orchestrator |
2026-03-28 05:07:01.289663 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 05:07:01.289668 | orchestrator | Saturday 28 March 2026 05:07:00 +0000 (0:00:02.145) 0:00:35.309 ********
2026-03-28 05:07:01.289673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:01.289682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289688 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:07:01.289693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:01.289699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:01.289710 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:07:32.602903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:32.603017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:32.603039 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:07:32.603059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:32.603096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:32.603114 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:07:32.603130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:32.603198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-28 05:07:32.603218 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:07:32.603235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-28 05:07:32.603246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged':
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 05:07:32.603257 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:07:32.603267 | orchestrator | 2026-03-28 05:07:32.603278 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603289 | orchestrator | Saturday 28 March 2026 05:07:03 +0000 (0:00:02.705) 0:00:38.014 ******** 2026-03-28 05:07:32.603299 | orchestrator | 2026-03-28 05:07:32.603309 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603323 | orchestrator | Saturday 28 March 2026 05:07:04 +0000 (0:00:00.577) 0:00:38.592 ******** 2026-03-28 05:07:32.603339 | orchestrator | 2026-03-28 05:07:32.603355 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603371 | orchestrator | Saturday 28 March 2026 05:07:04 +0000 (0:00:00.497) 0:00:39.090 ******** 2026-03-28 05:07:32.603386 | orchestrator | 2026-03-28 05:07:32.603402 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603418 | orchestrator | Saturday 28 March 2026 05:07:05 +0000 (0:00:00.518) 0:00:39.609 ******** 2026-03-28 05:07:32.603468 | orchestrator | 2026-03-28 05:07:32.603492 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603510 | orchestrator | Saturday 28 March 2026 05:07:05 +0000 (0:00:00.759) 0:00:40.369 ******** 2026-03-28 05:07:32.603528 | orchestrator | 2026-03-28 05:07:32.603546 | 
orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 05:07:32.603562 | orchestrator | Saturday 28 March 2026 05:07:06 +0000 (0:00:00.538) 0:00:40.908 ******** 2026-03-28 05:07:32.603578 | orchestrator | 2026-03-28 05:07:32.603595 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-28 05:07:32.603624 | orchestrator | Saturday 28 March 2026 05:07:07 +0000 (0:00:01.084) 0:00:41.992 ******** 2026-03-28 05:07:32.603642 | orchestrator | changed: [testbed-node-3] 2026-03-28 05:07:32.603660 | orchestrator | changed: [testbed-node-4] 2026-03-28 05:07:32.603680 | orchestrator | changed: [testbed-node-5] 2026-03-28 05:07:32.603697 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:07:32.603714 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:07:32.603730 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:07:32.603745 | orchestrator | 2026-03-28 05:07:32.603760 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-28 05:07:32.603777 | orchestrator | Saturday 28 March 2026 05:07:19 +0000 (0:00:11.692) 0:00:53.685 ******** 2026-03-28 05:07:32.603794 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:07:32.603811 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:07:32.603827 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:07:32.603843 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:07:32.603858 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:07:32.603875 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:07:32.603891 | orchestrator | 2026-03-28 05:07:32.603906 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 05:07:32.603922 | orchestrator | Saturday 28 March 2026 05:07:21 +0000 (0:00:02.293) 0:00:55.978 ******** 2026-03-28 05:07:32.603938 | orchestrator | changed: [testbed-node-4] 2026-03-28 05:07:32.603954 | orchestrator | 
changed: [testbed-node-5] 2026-03-28 05:07:32.603970 | orchestrator | changed: [testbed-node-3] 2026-03-28 05:07:32.603987 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:07:32.604004 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:07:32.604021 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:07:32.604037 | orchestrator | 2026-03-28 05:07:32.604053 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-28 05:07:32.604086 | orchestrator | Saturday 28 March 2026 05:07:32 +0000 (0:00:11.108) 0:01:07.087 ******** 2026-03-28 05:07:48.417182 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-28 05:07:48.417276 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-28 05:07:48.417286 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-28 05:07:48.417293 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-28 05:07:48.417300 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-28 05:07:48.417306 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-28 05:07:48.417313 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-28 05:07:48.417319 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-28 05:07:48.417326 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-28 05:07:48.417332 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-28 05:07:48.417339 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-28 05:07:48.417345 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-28 05:07:48.417352 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417358 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417380 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417387 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417394 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417400 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 05:07:48.417407 | orchestrator | 2026-03-28 05:07:48.417414 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-28 05:07:48.417421 | orchestrator | Saturday 28 March 2026 05:07:40 +0000 (0:00:07.532) 0:01:14.620 ******** 2026-03-28 05:07:48.417428 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-28 05:07:48.417435 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:07:48.417486 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-28 05:07:48.417493 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:07:48.417499 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-28 
05:07:48.417517 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:07:48.417524 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-03-28 05:07:48.417531 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-03-28 05:07:48.417537 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-03-28 05:07:48.417543 | orchestrator | 2026-03-28 05:07:48.417550 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-28 05:07:48.417556 | orchestrator | Saturday 28 March 2026 05:07:43 +0000 (0:00:03.197) 0:01:17.817 ******** 2026-03-28 05:07:48.417562 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-28 05:07:48.417569 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:07:48.417575 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-28 05:07:48.417581 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:07:48.417588 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-28 05:07:48.417594 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:07:48.417600 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-28 05:07:48.417606 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-28 05:07:48.417613 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-28 05:07:48.417619 | orchestrator | 2026-03-28 05:07:48.417625 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 05:07:48.417632 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 05:07:48.417640 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 05:07:48.417647 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 05:07:48.417654 | 
orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 05:07:48.417672 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 05:07:48.417679 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 05:07:48.417685 | orchestrator | 2026-03-28 05:07:48.417691 | orchestrator | 2026-03-28 05:07:48.417698 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 05:07:48.417704 | orchestrator | Saturday 28 March 2026 05:07:47 +0000 (0:00:04.570) 0:01:22.387 ******** 2026-03-28 05:07:48.417716 | orchestrator | =============================================================================== 2026-03-28 05:07:48.417722 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.69s 2026-03-28 05:07:48.417729 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.11s 2026-03-28 05:07:48.417735 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.53s 2026-03-28 05:07:48.417741 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.57s 2026-03-28 05:07:48.417748 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.99s 2026-03-28 05:07:48.417754 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.98s 2026-03-28 05:07:48.417760 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.39s 2026-03-28 05:07:48.417767 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.20s 2026-03-28 05:07:48.417773 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.10s 2026-03-28 05:07:48.417779 | orchestrator | openvswitch : 
include_tasks --------------------------------------------- 2.95s 2026-03-28 05:07:48.417786 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.84s 2026-03-28 05:07:48.417792 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.84s 2026-03-28 05:07:48.417798 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.71s 2026-03-28 05:07:48.417804 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.69s 2026-03-28 05:07:48.417811 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.66s 2026-03-28 05:07:48.417817 | orchestrator | module-load : Load modules ---------------------------------------------- 2.62s 2026-03-28 05:07:48.417823 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.43s 2026-03-28 05:07:48.417830 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.29s 2026-03-28 05:07:48.417836 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.20s 2026-03-28 05:07:48.417842 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.15s 2026-03-28 05:07:48.780397 | orchestrator | + osism apply -a upgrade ovn 2026-03-28 05:07:50.987103 | orchestrator | 2026-03-28 05:07:50 | INFO  | Task e254bd64-bd62-4010-a7bd-245e46e1bab4 (ovn) was prepared for execution. 2026-03-28 05:07:50.987214 | orchestrator | 2026-03-28 05:07:50 | INFO  | It takes a moment until task e254bd64-bd62-4010-a7bd-245e46e1bab4 (ovn) has been started and output is visible here. 
2026-03-28 05:08:14.447670 | orchestrator | 2026-03-28 05:08:14.447809 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 05:08:14.447829 | orchestrator | 2026-03-28 05:08:14.447841 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 05:08:14.447853 | orchestrator | Saturday 28 March 2026 05:07:58 +0000 (0:00:03.053) 0:00:03.053 ******** 2026-03-28 05:08:14.447865 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:08:14.447877 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:08:14.447889 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:08:14.447900 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:08:14.447911 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:08:14.447922 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:08:14.447933 | orchestrator | 2026-03-28 05:08:14.447944 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 05:08:14.447955 | orchestrator | Saturday 28 March 2026 05:08:01 +0000 (0:00:03.456) 0:00:06.509 ******** 2026-03-28 05:08:14.447967 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-28 05:08:14.447979 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-28 05:08:14.447990 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-28 05:08:14.448001 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-28 05:08:14.448031 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-28 05:08:14.448043 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-28 05:08:14.448054 | orchestrator | 2026-03-28 05:08:14.448066 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-28 05:08:14.448077 | orchestrator | 2026-03-28 05:08:14.448089 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-03-28 05:08:14.448100 | orchestrator | Saturday 28 March 2026 05:08:04 +0000 (0:00:02.598) 0:00:09.107 ******** 2026-03-28 05:08:14.448112 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 05:08:14.448124 | orchestrator | 2026-03-28 05:08:14.448136 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-28 05:08:14.448147 | orchestrator | Saturday 28 March 2026 05:08:07 +0000 (0:00:03.239) 0:00:12.347 ******** 2026-03-28 05:08:14.448160 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448186 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448200 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448213 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448251 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448266 | orchestrator | 2026-03-28 05:08:14.448279 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-28 05:08:14.448293 | orchestrator | Saturday 28 March 2026 05:08:10 +0000 (0:00:02.680) 0:00:15.027 ******** 2026-03-28 05:08:14.448314 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448326 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448338 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448360 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448372 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448383 | orchestrator | 2026-03-28 05:08:14.448394 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-28 05:08:14.448406 | orchestrator | Saturday 28 March 2026 05:08:13 +0000 (0:00:02.983) 0:00:18.011 ******** 2026-03-28 05:08:14.448417 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:14.448458 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:24.232169 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:24.232245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:24.232252 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:08:24.232258 | orchestrator | 2026-03-28 05:08:24.232264 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
**************************
2026-03-28 05:08:24.232269 | orchestrator | Saturday 28 March 2026 05:08:16 +0000 (0:00:02.679) 0:00:20.690 ********
2026-03-28 05:08:24.232274 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232279 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232284 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232304 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232353 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232359 | orchestrator |
2026-03-28 05:08:24.232363 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-03-28 05:08:24.232368 | orchestrator | Saturday 28 March 2026 05:08:19 +0000 (0:00:03.197) 0:00:23.888 ********
2026-03-28 05:08:24.232374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:24.232405 | orchestrator |
2026-03-28 05:08:24.232410 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-03-28 05:08:24.232419 | orchestrator | Saturday 28 March 2026 05:08:21 +0000 (0:00:02.565) 0:00:26.453 ********
2026-03-28 05:08:24.232424 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 05:08:24.232430 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232434 | orchestrator | }
2026-03-28 05:08:24.232439 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 05:08:24.232444 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232448 | orchestrator | }
2026-03-28 05:08:24.232452 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 05:08:24.232457 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232461 | orchestrator | }
2026-03-28 05:08:24.232465 | orchestrator | changed: [testbed-node-3] => {
2026-03-28 05:08:24.232510 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232516 | orchestrator | }
2026-03-28 05:08:24.232520 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 05:08:24.232525 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232529 | orchestrator | }
2026-03-28 05:08:24.232550 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 05:08:24.232555 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 05:08:24.232560 | orchestrator | }
2026-03-28 05:08:24.232564 | orchestrator |
2026-03-28 05:08:24.232572 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 05:08:24.232577 | orchestrator | Saturday 28 March 2026 05:08:24 +0000 (0:00:02.141) 0:00:28.595 ********
2026-03-28 05:08:24.232587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714343 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:08:53.714448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714465 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:08:53.714475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714483 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:08:53.714493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714553 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:08:53.714562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714596 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:08:53.714605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:08:53.714613 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:08:53.714620 | orchestrator |
2026-03-28 05:08:53.714629 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-28 05:08:53.714637 | orchestrator | Saturday 28 March 2026 05:08:26 +0000 (0:00:02.677) 0:00:31.272 ********
2026-03-28 05:08:53.714644 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:08:53.714652 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:08:53.714659 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:08:53.714666 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:08:53.714673 | orchestrator | ok: [testbed-node-4]
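The "Configure OVN in OVSDB" task writes per-host `external_ids` settings (encapsulation IP, encapsulation type, the southbound `ovn-remote` endpoint list) into each node's local Open vSwitch database. As a minimal sketch of how the comma-joined `ovn-remote` value seen in this log is assembled, here is a small Python helper; the function name `build_ovn_remote` is hypothetical and not part of kolla-ansible, and the IPs and port 16641 are simply taken from the log above:

```python
# Sketch (assumed helper, not kolla-ansible code): build the "ovn-remote"
# connection string from the OVN SB endpoint on each controller host.
def build_ovn_remote(db_hosts, port=16641, proto="tcp"):
    """Join one <proto>:<ip>:<port> endpoint per OVN DB host, comma-separated."""
    return ",".join(f"{proto}:{ip}:{port}" for ip in db_hosts)

remote = build_ovn_remote(["192.168.16.10", "192.168.16.11", "192.168.16.12"])
print(remote)  # -> tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641
```

Giving ovn-controller all three endpoints lets it fail over between the clustered southbound databases without reconfiguration.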
2026-03-28 05:08:53.714680 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:08:53.714686 | orchestrator |
2026-03-28 05:08:53.714694 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-28 05:08:53.714701 | orchestrator | Saturday 28 March 2026 05:08:30 +0000 (0:00:03.738) 0:00:35.011 ********
2026-03-28 05:08:53.714709 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-28 05:08:53.714718 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-28 05:08:53.714725 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-28 05:08:53.714732 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-28 05:08:53.714739 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-28 05:08:53.714759 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-28 05:08:53.714767 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714774 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714782 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714789 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714796 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714818 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 05:08:53.714825 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714834 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714850 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714868 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 05:08:53.714883 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714892 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714900 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714909 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714917 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714925 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 05:08:53.714934 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714942 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714950 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714958 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714967 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714975 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 05:08:53.714984 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.714992 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.715000 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.715009 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.715017 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.715025 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 05:08:53.715033 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 05:08:53.715042 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 05:08:53.715050 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 05:08:53.715058 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 05:08:53.715066 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 05:08:53.715075 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 05:08:53.715083 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-28 05:08:53.715101 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-28 05:08:53.715110 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-28 05:08:53.715118 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-28 05:08:53.715126 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-28 05:08:53.715139 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-28 05:11:43.306475 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 05:11:43.306592 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 05:11:43.306610 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 05:11:43.306623 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 05:11:43.306634 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 05:11:43.306715 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 05:11:43.306730 | orchestrator |
2026-03-28 05:11:43.306742 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306754 | orchestrator | Saturday 28 March 2026 05:08:50 +0000 (0:00:20.069) 0:00:55.081 ********
2026-03-28 05:11:43.306765 | orchestrator |
2026-03-28 05:11:43.306777 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306788 | orchestrator | Saturday 28 March 2026 05:08:50 +0000 (0:00:00.428) 0:00:55.509 ********
2026-03-28 05:11:43.306799 | orchestrator |
2026-03-28 05:11:43.306811 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306822 | orchestrator | Saturday 28 March 2026 05:08:51 +0000 (0:00:00.481) 0:00:55.990 ********
2026-03-28 05:11:43.306833 | orchestrator |
2026-03-28 05:11:43.306844 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306855 | orchestrator | Saturday 28 March 2026 05:08:51 +0000 (0:00:00.448) 0:00:56.439 ********
2026-03-28 05:11:43.306866 | orchestrator |
2026-03-28 05:11:43.306878 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306889 | orchestrator | Saturday 28 March 2026 05:08:52 +0000 (0:00:00.486) 0:00:56.926 ********
2026-03-28 05:11:43.306900 | orchestrator |
2026-03-28 05:11:43.306912 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 05:11:43.306923 | orchestrator | Saturday 28 March 2026 05:08:52 +0000 (0:00:00.450) 0:00:57.376 ********
2026-03-28 05:11:43.306934 | orchestrator |
2026-03-28 05:11:43.306945 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-28 05:11:43.306956 | orchestrator | Saturday 28 March 2026 05:08:53 +0000 (0:00:00.805) 0:00:58.182 ********
2026-03-28 05:11:43.306967 | orchestrator |
2026-03-28 05:11:43.306978 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] ***
2026-03-28 05:11:43.306990 | orchestrator | changed: [testbed-node-5]
2026-03-28 05:11:43.307003 | orchestrator | changed: [testbed-node-4]
2026-03-28 05:11:43.307014 | orchestrator | changed: [testbed-node-3]
2026-03-28 05:11:43.307027 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:11:43.307039 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:11:43.307052 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:11:43.307064 | orchestrator |
2026-03-28 05:11:43.307077 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-28 05:11:43.307090 | orchestrator |
2026-03-28 05:11:43.307102 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 05:11:43.307115 | orchestrator | Saturday 28 March 2026 05:11:05 +0000 (0:02:11.874) 0:03:10.057 ********
2026-03-28 05:11:43.307136 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 05:11:43.307156 | orchestrator |
2026-03-28 05:11:43.307176 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 05:11:43.307196 | orchestrator | Saturday 28 March 2026 05:11:07 +0000 (0:00:02.051) 0:03:12.108 ********
2026-03-28 05:11:43.307213 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 05:11:43.307265 | orchestrator |
2026-03-28 05:11:43.307282 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-28 05:11:43.307295 | orchestrator | Saturday 28 March 2026 05:11:09 +0000 (0:00:02.071) 0:03:14.180 ********
2026-03-28 05:11:43.307308 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307321 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307334 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307346 | orchestrator |
2026-03-28 05:11:43.307359 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-28 05:11:43.307373 | orchestrator | Saturday 28 March 2026 05:11:11 +0000 (0:00:01.906) 0:03:16.086 ********
2026-03-28 05:11:43.307385 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307397 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307408 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307419 | orchestrator |
2026-03-28 05:11:43.307430 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-28 05:11:43.307456 | orchestrator | Saturday 28 March 2026 05:11:12 +0000 (0:00:01.381) 0:03:17.468 ********
2026-03-28 05:11:43.307467 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307478 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307489 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307499 | orchestrator |
2026-03-28 05:11:43.307510 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-28 05:11:43.307522 | orchestrator | Saturday 28 March 2026 05:11:14 +0000 (0:00:01.408) 0:03:18.877 ********
2026-03-28 05:11:43.307532 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307543 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307554 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307565 | orchestrator |
2026-03-28 05:11:43.307576 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-28 05:11:43.307587 | orchestrator | Saturday 28 March 2026 05:11:16 +0000 (0:00:01.654) 0:03:20.531 ********
2026-03-28 05:11:43.307598 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307625 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307637 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307679 | orchestrator |
2026-03-28 05:11:43.307696 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-28 05:11:43.307722 | orchestrator | Saturday 28 March 2026 05:11:17 +0000 (0:00:01.431) 0:03:21.963 ********
2026-03-28 05:11:43.307745 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:11:43.307760 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:11:43.307777 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:11:43.307793 | orchestrator |
2026-03-28 05:11:43.307810 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-28 05:11:43.307827 | orchestrator | Saturday 28 March 2026 05:11:18 +0000 (0:00:01.377) 0:03:23.340 ********
2026-03-28 05:11:43.307843 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307860 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307877 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307894 | orchestrator |
2026-03-28 05:11:43.307914 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-28 05:11:43.307933 | orchestrator | Saturday 28 March 2026 05:11:20 +0000 (0:00:01.785) 0:03:25.126 ********
2026-03-28 05:11:43.307952 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.307965 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.307975 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.307986 | orchestrator |
2026-03-28 05:11:43.307997 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-28 05:11:43.308008 | orchestrator | Saturday 28 March 2026 05:11:22 +0000 (0:00:01.611) 0:03:26.737 ********
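The "database information" and "leader/follower role" tasks that follow divide the three DB hosts by their Raft role, which OVN reports in the `Role:` line of `ovs-appctl ... cluster/status` output. A minimal sketch of extracting that field; the sample text below is illustrative only (not captured from this job), and `db_role` is a hypothetical helper, not kolla-ansible code:

```python
# Sketch (assumption: role is read from cluster/status-style text, as printed
# by `ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound`).
SAMPLE_STATUS = """\
Name: OVN_Northbound
Cluster ID: f0a1 (hypothetical sample)
Address: tcp:192.168.16.10:6643
Status: cluster member
Role: leader
Term: 4
"""

def db_role(status_text):
    """Return the Raft role ('leader' or 'follower') from cluster/status text."""
    for line in status_text.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return None  # no Role line found

print(db_role(SAMPLE_STATUS))  # -> leader
```

A cluster with no reachable leader is the failure condition the subsequent "Fail on existing OVN NB/SB cluster with no leader" tasks guard against before the upgrade proceeds.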
2026-03-28 05:11:43.308019 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308030 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308041 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308052 | orchestrator |
2026-03-28 05:11:43.308064 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-28 05:11:43.308108 | orchestrator | Saturday 28 March 2026 05:11:24 +0000 (0:00:01.888) 0:03:28.626 ********
2026-03-28 05:11:43.308131 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308149 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308166 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308183 | orchestrator |
2026-03-28 05:11:43.308202 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-28 05:11:43.308221 | orchestrator | Saturday 28 March 2026 05:11:25 +0000 (0:00:01.474) 0:03:30.100 ********
2026-03-28 05:11:43.308241 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:11:43.308259 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:11:43.308279 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:11:43.308298 | orchestrator |
2026-03-28 05:11:43.308318 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-28 05:11:43.308330 | orchestrator | Saturday 28 March 2026 05:11:27 +0000 (0:00:01.578) 0:03:31.679 ********
2026-03-28 05:11:43.308341 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:11:43.308352 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:11:43.308363 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:11:43.308374 | orchestrator |
2026-03-28 05:11:43.308385 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-28 05:11:43.308396 | orchestrator | Saturday 28 March 2026 05:11:28 +0000 (0:00:01.487) 0:03:33.166 ********
2026-03-28 05:11:43.308407 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308418 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308428 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308439 | orchestrator |
2026-03-28 05:11:43.308450 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-28 05:11:43.308461 | orchestrator | Saturday 28 March 2026 05:11:30 +0000 (0:00:01.896) 0:03:35.063 ********
2026-03-28 05:11:43.308472 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308482 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308493 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308504 | orchestrator |
2026-03-28 05:11:43.308515 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-28 05:11:43.308526 | orchestrator | Saturday 28 March 2026 05:11:32 +0000 (0:00:01.479) 0:03:36.542 ********
2026-03-28 05:11:43.308536 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308547 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308558 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308569 | orchestrator |
2026-03-28 05:11:43.308580 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-28 05:11:43.308591 | orchestrator | Saturday 28 March 2026 05:11:34 +0000 (0:00:02.263) 0:03:38.806 ********
2026-03-28 05:11:43.308602 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:11:43.308612 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:11:43.308623 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:11:43.308634 | orchestrator |
2026-03-28 05:11:43.308665 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-28 05:11:43.308676 | orchestrator | Saturday 28 March 2026 05:11:35 +0000 (0:00:01.505) 0:03:40.312 ********
2026-03-28 05:11:43.308687 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:11:43.308698 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:11:43.308709 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:11:43.308720 | orchestrator |
2026-03-28 05:11:43.308731 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 05:11:43.308742 | orchestrator | Saturday 28 March 2026 05:11:37 +0000 (0:00:01.470) 0:03:41.782 ********
2026-03-28 05:11:43.308753 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:11:43.308763 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:11:43.308805 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:11:43.308817 | orchestrator |
2026-03-28 05:11:43.308828 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-28 05:11:43.308838 | orchestrator | Saturday 28 March 2026 05:11:39 +0000 (0:00:01.770) 0:03:43.553 ********
2026-03-28 05:11:43.308875 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425477 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425502 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425641 | orchestrator |
2026-03-28 05:11:49.425748 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-28 05:11:49.425771 | orchestrator | Saturday 28 March 2026 05:11:43 +0000 (0:00:04.256) 0:03:47.810 ********
2026-03-28 05:11:49.425784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425815 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 05:11:49.425837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:11:49.425858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.162656 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.281551 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.281652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:04.281695 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.281707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:04.281770 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.281781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:04.281792 | orchestrator | 2026-03-28 05:12:04.281806 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-28 05:12:04.281818 | orchestrator | Saturday 28 March 2026 05:11:49 +0000 (0:00:06.125) 0:03:53.935 ******** 2026-03-28 05:12:04.281830 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-28 05:12:04.281840 | orchestrator | 2026-03-28 05:12:04.281851 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-28 05:12:04.281861 | orchestrator | Saturday 28 March 2026 05:11:51 +0000 (0:00:01.916) 0:03:55.851 ******** 2026-03-28 05:12:04.281871 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:12:04.281882 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:12:04.281927 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:12:04.281937 | orchestrator | 2026-03-28 05:12:04.281947 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-28 05:12:04.281957 | orchestrator | Saturday 28 March 2026 05:11:53 +0000 
(0:00:01.784) 0:03:57.636 ******** 2026-03-28 05:12:04.281967 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:12:04.281977 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:12:04.281987 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:12:04.281996 | orchestrator | 2026-03-28 05:12:04.282006 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-28 05:12:04.282073 | orchestrator | Saturday 28 March 2026 05:11:55 +0000 (0:00:02.685) 0:04:00.321 ******** 2026-03-28 05:12:04.282084 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:12:04.282094 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:12:04.282105 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:12:04.282115 | orchestrator | 2026-03-28 05:12:04.282125 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-28 05:12:04.282135 | orchestrator | Saturday 28 March 2026 05:11:58 +0000 (0:00:02.853) 0:04:03.174 ******** 2026-03-28 05:12:04.282146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:04.282237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:08.911723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.911831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:08.911870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.911881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:12:08.911904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.911914 | orchestrator | 2026-03-28 05:12:08.911926 | orchestrator | TASK [service-check-containers : 
ovn_db | Notify handlers to restart containers] *** 2026-03-28 05:12:08.911936 | orchestrator | Saturday 28 March 2026 05:12:04 +0000 (0:00:05.482) 0:04:08.657 ******** 2026-03-28 05:12:08.911947 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 05:12:08.911956 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:12:08.911965 | orchestrator | } 2026-03-28 05:12:08.911974 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 05:12:08.911983 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:12:08.911991 | orchestrator | } 2026-03-28 05:12:08.912000 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 05:12:08.912009 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:12:08.912017 | orchestrator | } 2026-03-28 05:12:08.912026 | orchestrator | 2026-03-28 05:12:08.912035 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 05:12:08.912044 | orchestrator | Saturday 28 March 2026 05:12:05 +0000 (0:00:01.413) 0:04:10.070 ******** 2026-03-28 05:12:08.912054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 05:12:08.912130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 05:12:08.912175 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 05:13:43.733876 | orchestrator | 2026-03-28 05:13:43.734132 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-28 05:13:43.734165 | orchestrator | Saturday 28 March 2026 05:12:08 +0000 (0:00:03.344) 0:04:13.415 ******** 2026-03-28 05:13:43.734179 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-28 05:13:43.734191 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-28 05:13:43.734203 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-28 05:13:43.734214 | orchestrator | 2026-03-28 05:13:43.734226 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-28 05:13:43.734238 | orchestrator | Saturday 28 March 2026 05:12:11 +0000 (0:00:02.350) 0:04:15.765 ******** 2026-03-28 05:13:43.734249 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 05:13:43.734262 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:13:43.734273 | orchestrator | } 
2026-03-28 05:13:43.734285 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 05:13:43.734296 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:13:43.734307 | orchestrator | } 2026-03-28 05:13:43.734319 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 05:13:43.734330 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 05:13:43.734341 | orchestrator | } 2026-03-28 05:13:43.734352 | orchestrator | 2026-03-28 05:13:43.734404 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 05:13:43.734418 | orchestrator | Saturday 28 March 2026 05:12:12 +0000 (0:00:01.373) 0:04:17.139 ******** 2026-03-28 05:13:43.734431 | orchestrator | 2026-03-28 05:13:43.734443 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 05:13:43.734456 | orchestrator | Saturday 28 March 2026 05:12:13 +0000 (0:00:00.444) 0:04:17.584 ******** 2026-03-28 05:13:43.734469 | orchestrator | 2026-03-28 05:13:43.734482 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 05:13:43.734494 | orchestrator | Saturday 28 March 2026 05:12:13 +0000 (0:00:00.450) 0:04:18.034 ******** 2026-03-28 05:13:43.734507 | orchestrator | 2026-03-28 05:13:43.734520 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 05:13:43.734533 | orchestrator | Saturday 28 March 2026 05:12:14 +0000 (0:00:01.130) 0:04:19.164 ******** 2026-03-28 05:13:43.734546 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:13:43.734558 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:13:43.734571 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:13:43.734584 | orchestrator | 2026-03-28 05:13:43.734597 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-28 05:13:43.734610 | orchestrator | Saturday 28 March 2026 05:12:31 +0000 
(0:00:16.879) 0:04:36.044 ******** 2026-03-28 05:13:43.734622 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:13:43.734635 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:13:43.734648 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:13:43.734661 | orchestrator | 2026-03-28 05:13:43.734692 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-28 05:13:43.734706 | orchestrator | Saturday 28 March 2026 05:12:48 +0000 (0:00:17.186) 0:04:53.230 ******** 2026-03-28 05:13:43.734719 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-28 05:13:43.734732 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-28 05:13:43.734815 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-28 05:13:43.734826 | orchestrator | 2026-03-28 05:13:43.734837 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-28 05:13:43.734872 | orchestrator | Saturday 28 March 2026 05:13:04 +0000 (0:00:16.072) 0:05:09.303 ******** 2026-03-28 05:13:43.734884 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:13:43.734895 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:13:43.734906 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:13:43.734917 | orchestrator | 2026-03-28 05:13:43.734928 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 05:13:43.734939 | orchestrator | Saturday 28 March 2026 05:13:22 +0000 (0:00:17.994) 0:05:27.297 ******** 2026-03-28 05:13:43.734950 | orchestrator | Pausing for 5 seconds 2026-03-28 05:13:43.734961 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:13:43.734972 | orchestrator | 2026-03-28 05:13:43.734983 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 05:13:43.734994 | orchestrator | Saturday 28 March 2026 05:13:28 +0000 (0:00:06.165) 0:05:33.463 ******** 2026-03-28 
05:13:43.735005 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:13:43.735017 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:13:43.735028 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:13:43.735039 | orchestrator |
2026-03-28 05:13:43.735050 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-28 05:13:43.735061 | orchestrator | Saturday 28 March 2026 05:13:30 +0000 (0:00:01.894) 0:05:35.358 ********
2026-03-28 05:13:43.735072 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:13:43.735083 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:13:43.735094 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:13:43.735105 | orchestrator |
2026-03-28 05:13:43.735116 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-28 05:13:43.735126 | orchestrator | Saturday 28 March 2026 05:13:32 +0000 (0:00:01.666) 0:05:37.025 ********
2026-03-28 05:13:43.735137 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:13:43.735148 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:13:43.735159 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:13:43.735170 | orchestrator |
2026-03-28 05:13:43.735181 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-28 05:13:43.735192 | orchestrator | Saturday 28 March 2026 05:13:34 +0000 (0:00:01.926) 0:05:38.951 ********
2026-03-28 05:13:43.735203 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:13:43.735214 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:13:43.735225 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:13:43.735235 | orchestrator |
2026-03-28 05:13:43.735246 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-28 05:13:43.735257 | orchestrator | Saturday 28 March 2026 05:13:36 +0000 (0:00:01.815) 0:05:40.767 ********
2026-03-28 05:13:43.735268 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:13:43.735279 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:13:43.735290 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:13:43.735301 | orchestrator |
2026-03-28 05:13:43.735312 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-28 05:13:43.735345 | orchestrator | Saturday 28 March 2026 05:13:38 +0000 (0:00:01.983) 0:05:42.751 ********
2026-03-28 05:13:43.735357 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:13:43.735368 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:13:43.735379 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:13:43.735390 | orchestrator |
2026-03-28 05:13:43.735401 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-28 05:13:43.735412 | orchestrator | Saturday 28 March 2026 05:13:40 +0000 (0:00:01.844) 0:05:44.596 ********
2026-03-28 05:13:43.735423 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-28 05:13:43.735434 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-28 05:13:43.735445 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-28 05:13:43.735455 | orchestrator |
2026-03-28 05:13:43.735466 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 05:13:43.735478 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 05:13:43.735499 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 05:13:43.735510 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-28 05:13:43.735522 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 05:13:43.735532 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 05:13:43.735543 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 05:13:43.735554 | orchestrator |
2026-03-28 05:13:43.735565 | orchestrator |
2026-03-28 05:13:43.735576 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 05:13:43.735587 | orchestrator | Saturday 28 March 2026 05:13:43 +0000 (0:00:03.181) 0:05:47.777 ********
2026-03-28 05:13:43.735598 | orchestrator | ===============================================================================
2026-03-28 05:13:43.735609 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.87s
2026-03-28 05:13:43.735620 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.07s
2026-03-28 05:13:43.735631 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.99s
2026-03-28 05:13:43.735642 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 17.19s
2026-03-28 05:13:43.735653 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.88s
2026-03-28 05:13:43.735664 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.07s
2026-03-28 05:13:43.735675 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.17s
2026-03-28 05:13:43.735685 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.13s
2026-03-28 05:13:43.735800 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.48s
2026-03-28 05:13:43.735825 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.26s
2026-03-28 05:13:43.735836 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.74s
2026-03-28 05:13:43.735847 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.46s
2026-03-28 05:13:43.735858 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.34s
2026-03-28 05:13:43.735869 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.24s
2026-03-28 05:13:43.735880 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.20s
2026-03-28 05:13:43.735891 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.18s
2026-03-28 05:13:43.735901 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.10s
2026-03-28 05:13:43.735912 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.98s
2026-03-28 05:13:43.735923 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.85s
2026-03-28 05:13:43.735934 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.69s
2026-03-28 05:13:44.166338 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-28 05:13:44.166438 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 05:13:44.166454 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-03-28 05:13:44.172925 | orchestrator | + set -e
2026-03-28 05:13:44.172997 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 05:13:44.173020 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 05:13:44.173040 | orchestrator | ++ INTERACTIVE=false
2026-03-28 05:13:44.173060 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 05:13:44.173106 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 05:13:44.173126 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-03-28 05:13:46.404448 | orchestrator | 2026-03-28 05:13:46 | INFO  | Task f524db12-60f2-4c22-bdff-6c7eb9aa9c90 (ceph-rolling_update) was prepared for execution.
2026-03-28 05:13:46.404549 | orchestrator | 2026-03-28 05:13:46 | INFO  | It takes a moment until task f524db12-60f2-4c22-bdff-6c7eb9aa9c90 (ceph-rolling_update) has been started and output is visible here.
2026-03-28 05:15:12.195873 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 05:15:12.195994 | orchestrator | 2.16.14
2026-03-28 05:15:12.196011 | orchestrator |
2026-03-28 05:15:12.196024 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-03-28 05:15:12.196036 | orchestrator |
2026-03-28 05:15:12.196048 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-03-28 05:15:12.196060 | orchestrator | Saturday 28 March 2026 05:13:55 +0000 (0:00:02.071) 0:00:02.071 ********
2026-03-28 05:15:12.196071 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-03-28 05:15:12.196082 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-03-28 05:15:12.196094 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-03-28 05:15:12.196105 | orchestrator | skipping: [localhost]
2026-03-28 05:15:12.196117 | orchestrator |
2026-03-28 05:15:12.196128 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-03-28 05:15:12.196139 | orchestrator |
2026-03-28 05:15:12.196150 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-03-28 05:15:12.196160 | orchestrator | Saturday 28 March 2026 05:13:57 +0000 (0:00:01.809) 0:00:03.880 ********
2026-03-28 05:15:12.196171 | orchestrator | ok: [testbed-node-0] => {
2026-03-28 05:15:12.196182 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196194 | orchestrator | }
2026-03-28 05:15:12.196205 | orchestrator | ok: [testbed-node-1] => {
2026-03-28 05:15:12.196216 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196227 | orchestrator | }
2026-03-28 05:15:12.196238 | orchestrator | ok: [testbed-node-2] => {
2026-03-28 05:15:12.196249 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196260 | orchestrator | }
2026-03-28 05:15:12.196271 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 05:15:12.196282 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196293 | orchestrator | }
2026-03-28 05:15:12.196303 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 05:15:12.196314 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196325 | orchestrator | }
2026-03-28 05:15:12.196338 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 05:15:12.196351 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196363 | orchestrator | }
2026-03-28 05:15:12.196376 | orchestrator | ok: [testbed-manager] => {
2026-03-28 05:15:12.196388 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-28 05:15:12.196400 | orchestrator | }
2026-03-28 05:15:12.196413 | orchestrator |
2026-03-28 05:15:12.196425 | orchestrator | TASK [Gather facts] ************************************************************
2026-03-28 05:15:12.196438 | orchestrator | Saturday 28 March 2026 05:14:02 +0000 (0:00:05.367) 0:00:09.248 ********
2026-03-28 05:15:12.196466 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:12.196479 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:12.196492 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:12.196504 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:12.196517 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:12.196529 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:12.196541 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.196554 | orchestrator |
2026-03-28 05:15:12.196588 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-03-28 05:15:12.196600 | orchestrator | Saturday 28 March 2026 05:14:09 +0000 (0:00:06.321) 0:00:15.570 ********
2026-03-28 05:15:12.196611 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 05:15:12.196621 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:15:12.196632 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-28 05:15:12.196643 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:15:12.196654 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 05:15:12.196665 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:15:12.196676 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 05:15:12.196687 | orchestrator |
2026-03-28 05:15:12.196697 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-03-28 05:15:12.196708 | orchestrator | Saturday 28 March 2026 05:14:40 +0000 (0:00:31.333) 0:00:46.904 ********
2026-03-28 05:15:12.196719 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.196730 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.196741 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.196752 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.196762 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.196773 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.196784 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.196795 | orchestrator |
2026-03-28 05:15:12.196835 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 05:15:12.196846 | orchestrator | Saturday 28 March 2026 05:14:42 +0000 (0:00:02.257) 0:00:49.162 ********
2026-03-28 05:15:12.196858 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-28 05:15:12.196870 | orchestrator |
2026-03-28 05:15:12.196882 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 05:15:12.196892 | orchestrator | Saturday 28 March 2026 05:14:45 +0000 (0:00:02.836) 0:00:51.999 ********
2026-03-28 05:15:12.196903 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.196914 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.196925 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.196935 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.196946 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.196957 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.196967 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.196978 | orchestrator |
2026-03-28 05:15:12.197007 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 05:15:12.197019 | orchestrator | Saturday 28 March 2026 05:14:48 +0000 (0:00:02.773) 0:00:54.772 ********
2026-03-28 05:15:12.197030 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197040 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197051 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197062 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197072 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197083 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197093 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197104 | orchestrator |
2026-03-28 05:15:12.197115 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 05:15:12.197126 | orchestrator | Saturday 28 March 2026 05:14:50 +0000 (0:00:02.020) 0:00:56.793 ********
2026-03-28 05:15:12.197136 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197147 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197157 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197168 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197178 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197189 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197207 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197218 | orchestrator |
2026-03-28 05:15:12.197229 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 05:15:12.197240 | orchestrator | Saturday 28 March 2026 05:14:53 +0000 (0:00:02.687) 0:00:59.480 ********
2026-03-28 05:15:12.197250 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197261 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197272 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197282 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197293 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197303 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197314 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197324 | orchestrator |
2026-03-28 05:15:12.197335 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 05:15:12.197346 | orchestrator | Saturday 28 March 2026 05:14:55 +0000 (0:00:02.017) 0:01:01.498 ********
2026-03-28 05:15:12.197356 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197367 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197377 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197388 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197398 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197409 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197419 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197430 | orchestrator |
2026-03-28 05:15:12.197440 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 05:15:12.197451 | orchestrator | Saturday 28 March 2026 05:14:57 +0000 (0:00:02.315) 0:01:03.813 ********
2026-03-28 05:15:12.197462 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197472 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197483 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197493 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197504 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197515 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197525 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197536 | orchestrator |
2026-03-28 05:15:12.197553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 05:15:12.197564 | orchestrator | Saturday 28 March 2026 05:14:59 +0000 (0:00:01.991) 0:01:05.804 ********
2026-03-28 05:15:12.197574 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:12.197585 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:12.197596 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:12.197607 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:12.197618 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:12.197629 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:12.197639 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:12.197650 | orchestrator |
2026-03-28 05:15:12.197660 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 05:15:12.197671 | orchestrator | Saturday 28 March 2026 05:15:01 +0000 (0:00:02.158) 0:01:07.962 ********
2026-03-28 05:15:12.197682 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197693 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197703 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197714 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197724 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197735 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197745 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197756 | orchestrator |
2026-03-28 05:15:12.197766 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 05:15:12.197777 | orchestrator | Saturday 28 March 2026 05:15:03 +0000 (0:00:02.199) 0:01:10.162 ********
2026-03-28 05:15:12.197788 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:15:12.197835 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:15:12.197856 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:15:12.197886 | orchestrator |
2026-03-28 05:15:12.197901 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 05:15:12.197912 | orchestrator | Saturday 28 March 2026 05:15:05 +0000 (0:00:01.760) 0:01:11.923 ********
2026-03-28 05:15:12.197923 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:12.197934 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:12.197945 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:12.197955 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:12.197966 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:12.197977 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:12.197988 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:12.197998 | orchestrator |
2026-03-28 05:15:12.198009 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 05:15:12.198079 | orchestrator | Saturday 28 March 2026 05:15:07 +0000 (0:00:02.069) 0:01:13.993 ********
2026-03-28 05:15:12.198093 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:15:12.198104 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:15:12.198115 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:15:12.198126 | orchestrator |
2026-03-28 05:15:12.198136 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 05:15:12.198147 | orchestrator | Saturday 28 March 2026 05:15:10 +0000 (0:00:03.194) 0:01:17.188 ********
2026-03-28 05:15:12.198167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:15:35.885317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 05:15:35.885431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 05:15:35.885444 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.885456 | orchestrator |
2026-03-28 05:15:35.885468 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 05:15:35.885479 | orchestrator | Saturday 28 March 2026 05:15:12 +0000 (0:00:01.421) 0:01:18.609 ********
2026-03-28 05:15:35.885492 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885505 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885525 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.885535 | orchestrator |
2026-03-28 05:15:35.885545 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 05:15:35.885555 | orchestrator | Saturday 28 March 2026 05:15:14 +0000 (0:00:02.081) 0:01:20.691 ********
2026-03-28 05:15:35.885567 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885638 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.885648 | orchestrator |
2026-03-28 05:15:35.885658 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 05:15:35.885668 | orchestrator | Saturday 28 March 2026 05:15:15 +0000 (0:00:01.245) 0:01:21.936 ********
2026-03-28 05:15:35.885680 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a580dbf75b8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:15:08.212045', 'end': '2026-03-28 05:15:08.251959', 'delta': '0:00:00.039914', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a580dbf75b8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885713 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '63c01d28d51e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:15:08.971955', 'end': '2026-03-28 05:15:09.017257', 'delta': '0:00:00.045302', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63c01d28d51e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885724 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '99ef085e2de2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:15:09.507946', 'end': '2026-03-28 05:15:09.563794', 'delta': '0:00:00.055848', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['99ef085e2de2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:15:35.885735 | orchestrator |
2026-03-28 05:15:35.885745 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 05:15:35.885755 | orchestrator | Saturday 28 March 2026 05:15:16 +0000 (0:00:01.253) 0:01:23.190 ********
2026-03-28 05:15:35.885765 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:35.885776 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:35.885786 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:35.885795 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:35.885805 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:35.885872 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:35.885885 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:35.885896 | orchestrator |
2026-03-28 05:15:35.885908 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 05:15:35.885919 | orchestrator | Saturday 28 March 2026 05:15:19 +0000 (0:00:02.273) 0:01:25.463 ********
2026-03-28 05:15:35.885939 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.885951 | orchestrator |
2026-03-28 05:15:35.885963 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 05:15:35.885973 | orchestrator | Saturday 28 March 2026 05:15:20 +0000 (0:00:01.282) 0:01:26.746 ********
2026-03-28 05:15:35.885983 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:35.885993 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:35.886003 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:35.886012 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:35.886079 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:35.886090 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:35.886099 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:35.886109 | orchestrator |
2026-03-28 05:15:35.886125 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 05:15:35.886135 | orchestrator | Saturday 28 March 2026 05:15:22 +0000 (0:00:02.269) 0:01:29.015 ********
2026-03-28 05:15:35.886144 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:35.886154 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886164 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886174 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886193 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 05:15:35.886212 | orchestrator |
2026-03-28 05:15:35.886222 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:15:35.886232 | orchestrator | Saturday 28 March 2026 05:15:26 +0000 (0:00:03.516) 0:01:32.532 ********
2026-03-28 05:15:35.886242 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:15:35.886251 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:15:35.886261 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:15:35.886270 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:15:35.886280 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:15:35.886289 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:15:35.886299 | orchestrator | ok: [testbed-manager]
2026-03-28 05:15:35.886309 | orchestrator |
2026-03-28 05:15:35.886319 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 05:15:35.886329 | orchestrator | Saturday 28 March 2026 05:15:28 +0000 (0:00:02.371) 0:01:34.903 ********
2026-03-28 05:15:35.886338 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.886348 | orchestrator |
2026-03-28 05:15:35.886358 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 05:15:35.886367 | orchestrator | Saturday 28 March 2026 05:15:29 +0000 (0:00:01.283) 0:01:36.187 ********
2026-03-28 05:15:35.886377 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.886387 | orchestrator |
2026-03-28 05:15:35.886397 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:15:35.886406 | orchestrator | Saturday 28 March 2026 05:15:31 +0000 (0:00:01.316) 0:01:37.504 ********
2026-03-28 05:15:35.886416 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.886426 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:35.886435 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:35.886445 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:35.886455 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:35.886464 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:35.886474 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:35.886484 | orchestrator |
2026-03-28 05:15:35.886493 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 05:15:35.886503 | orchestrator | Saturday 28 March 2026 05:15:33 +0000 (0:00:02.662) 0:01:40.166 ********
2026-03-28 05:15:35.886513 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:35.886523 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:35.886532 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:35.886549 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:35.886559 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:35.886569 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:35.886586 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846313 | orchestrator |
2026-03-28 05:15:47.846436 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 05:15:47.846459 | orchestrator | Saturday 28 March 2026 05:15:35 +0000 (0:00:02.123) 0:01:42.289 ********
2026-03-28 05:15:47.846473 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:47.846483 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:47.846491 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:47.846499 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:47.846508 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:47.846516 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:47.846524 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846532 | orchestrator |
2026-03-28 05:15:47.846541 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 05:15:47.846549 | orchestrator | Saturday 28 March 2026 05:15:38 +0000 (0:00:02.413) 0:01:44.703 ********
2026-03-28 05:15:47.846557 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:47.846565 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:47.846573 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:47.846581 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:47.846589 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:47.846597 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:47.846605 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846613 | orchestrator |
2026-03-28 05:15:47.846621 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 05:15:47.846629 | orchestrator | Saturday 28 March 2026 05:15:40 +0000 (0:00:02.173) 0:01:46.877 ********
2026-03-28 05:15:47.846638 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:47.846646 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:47.846653 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:47.846661 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:47.846669 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:47.846677 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:47.846685 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846693 | orchestrator |
2026-03-28 05:15:47.846701 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 05:15:47.846709 | orchestrator | Saturday 28 March 2026 05:15:42 +0000 (0:00:02.477) 0:01:49.354 ********
2026-03-28 05:15:47.846718 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:47.846726 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:47.846734 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:47.846742 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:47.846750 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:47.846758 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:47.846766 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846774 | orchestrator |
2026-03-28 05:15:47.846782 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 05:15:47.846806 | orchestrator | Saturday 28 March 2026 05:15:45 +0000 (0:00:02.286) 0:01:51.642 ********
2026-03-28 05:15:47.846815 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:47.846852 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:47.846863 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:47.846872 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:15:47.846882 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:15:47.846891 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:15:47.846900 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:47.846909 | orchestrator |
2026-03-28 05:15:47.846918 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 05:15:47.846928 | orchestrator | Saturday 28 March 2026 05:15:47 +0000 (0:00:02.320) 0:01:53.962 ********
2026-03-28 05:15:47.846959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 05:15:47.846972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 05:15:47.846982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 05:15:47.847010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-28 05:15:47.847023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 05:15:47.847032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 05:15:47.847041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize':
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:47.847060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 
'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:15:47.847078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:47.847094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:48.147630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b8082e3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 
'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.147706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147745 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:15:48.147758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.147793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:48.147813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e4bb62b9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.526621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526646 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:15:48.526676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}})  2026-03-28 05:15:48.526717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.526730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}})  2026-03-28 05:15:48.526743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.526766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:48.526778 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:15:48.526797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}})  2026-03-28 05:15:48.533707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}})  2026-03-28 05:15:48.533720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.533784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}})  2026-03-28 05:15:48.533820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.533868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.533900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}})  2026-03-28 05:15:48.771705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:48.771750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}})  2026-03-28 05:15:48.771903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}})  2026-03-28 05:15:48.771925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.771941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 
'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.771956 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:15:48.771978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}})  2026-03-28 05:15:48.913786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:15:48.913799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}})  2026-03-28 05:15:48.913881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:48.913943 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:15:48.913962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:48.913988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 
05:15:48.914000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}})  2026-03-28 05:15:48.914013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}})  2026-03-28 05:15:48.914098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.201796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:15:50.201951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.201969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202073 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:15:50.202088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202117 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202129 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-43-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:15:50.202160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202172 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:15:50.202205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c77014e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  [... 2 more skipped loop-device items (loop5, loop3) elided, same false condition ...] 2026-03-28 05:15:50.420269 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:15:50.420285 | orchestrator |
2026-03-28 05:15:50.420305 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 05:15:50.420324 | orchestrator | Saturday 28 March 2026 05:15:50 +0000 (0:00:02.651) 0:01:56.614 ********
2026-03-28 05:15:50.420345 | orchestrator | skipping: [testbed-node-0] => [... 10 per-device items (loop0-loop7, sr0, sda) elided; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])' ...]
2026-03-28 05:15:50.627568 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:15:50.627578 | orchestrator | skipping: [testbed-node-1] => [... 10 per-device items (loop0-loop7, sr0, sda) elided; same false condition ...]
2026-03-28 05:15:50.915482 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:15:50.915585 | orchestrator | skipping: [testbed-node-2] => [... 10 per-device items (loop0-loop7, sr0, sda) elided; same false condition ...]
2026-03-28 05:15:50.915768 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:15:50.915786 | orchestrator | skipping: [testbed-node-3] => [... per-device items elided (loop devices, sr0, Ceph LVM/LUKS device-mapper volumes dm-0/dm-1/dm-2, data disks sdb/sdd); false_condition: 'osd_auto_discovery | default(False) | bool' ...]
2026-03-28 05:15:51.106374 | orchestrator | skipping: [testbed-node-4] => [... per-device items elided (loop devices, dm-1, data disks sdb/sdd); false_condition: 'osd_auto_discovery | default(False) | bool' ...]
2026-03-28 05:15:51.212175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.212206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.212212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.212220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.212230 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.212238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-28 05:15:51.299346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': 
None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299454 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299539 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299551 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.299569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': 
[], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415513 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415636 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:15:51.415650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.415661 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:15:51.415681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.492634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.492745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:15:51.492779 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 
'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492793 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492877 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492917 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:15:51.492946 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-43-01-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.287748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288010 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288040 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288067 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288079 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:00.288161 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c77014e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part16', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part14', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part15', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part1', 'scsi-SQEMU_QEMU_HARDDISK_c77014e9-a354-44fa-b62b-eaaba3b9788d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288198 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288212 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 05:16:00.288224 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:00.288235 | orchestrator |
2026-03-28 05:16:00.288248 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 05:16:00.288260 | orchestrator | Saturday 28 March 2026 05:15:52 +0000 (0:00:02.526) 0:01:59.140 ********
2026-03-28 05:16:00.288274 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:16:00.288288 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:16:00.288301 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:16:00.288314 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:16:00.288327 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:16:00.288340 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:16:00.288353 | orchestrator | ok: [testbed-manager]
2026-03-28 05:16:00.288366 | orchestrator |
2026-03-28 05:16:00.288380 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 05:16:00.288393 | orchestrator | Saturday 28 March 2026 05:15:55 +0000 (0:00:02.718) 0:02:01.859 ********
2026-03-28 05:16:00.288414 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:16:00.288425 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:16:00.288436 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:16:00.288447 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:16:00.288458 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:16:00.288469 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:16:00.288480 | orchestrator | ok: [testbed-manager]
2026-03-28 05:16:00.288491 | orchestrator |
2026-03-28 05:16:00.288502 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 05:16:00.288513 | orchestrator | Saturday 28 March 2026 05:15:57 +0000 (0:00:02.092) 0:02:03.951 ********
2026-03-28 05:16:00.288524 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:16:00.288535 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:16:00.288546 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:16:00.288556 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:16:00.288567 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:16:00.288578 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:00.288589 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:16:00.288600 | orchestrator |
2026-03-28 05:16:00.288611 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 05:16:00.288629 | orchestrator | Saturday 28 March 2026 05:16:00 +0000 (0:00:02.754) 0:02:06.705 ********
2026-03-28 05:16:32.469326 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:16:32.469446 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:16:32.469461 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:16:32.469473 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.469484 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.469495 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.469507 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:32.469518 | orchestrator |
2026-03-28 05:16:32.469530 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 05:16:32.469543 | orchestrator | Saturday 28 March 2026 05:16:02 +0000 (0:00:02.092) 0:02:08.798 ********
2026-03-28 05:16:32.469554 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:16:32.469565 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:16:32.469576 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:16:32.469587 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.469598 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.469609 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.469620 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-03-28 05:16:32.469632 | orchestrator |
2026-03-28 05:16:32.469643 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 05:16:32.469654 | orchestrator | Saturday 28 March 2026 05:16:05 +0000 (0:00:02.964) 0:02:11.763 ********
2026-03-28 05:16:32.469683 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:16:32.469695 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:16:32.469706 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:16:32.469717 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.469728 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.469739 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.469750 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:32.469762 | orchestrator |
2026-03-28 05:16:32.469773 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 05:16:32.469784 | orchestrator | Saturday 28 March 2026 05:16:07 +0000 (0:00:01.990) 0:02:13.753 ********
2026-03-28 05:16:32.469796 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:16:32.469807 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 05:16:32.469819 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 05:16:32.469830 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 05:16:32.469841 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 05:16:32.469882 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 05:16:32.469897 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:16:32.469934 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 05:16:32.469948 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 05:16:32.469960 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 05:16:32.469971 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 05:16:32.469982 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 05:16:32.469993 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 05:16:32.470004 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 05:16:32.470015 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 05:16:32.470083 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 05:16:32.470094 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-28 05:16:32.470105 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 05:16:32.470116 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-28 05:16:32.470127 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 05:16:32.470138 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-28 05:16:32.470148 | orchestrator |
2026-03-28 05:16:32.470160 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 05:16:32.470171 | orchestrator | Saturday 28 March 2026 05:16:11 +0000 (0:00:03.724) 0:02:17.477 ********
2026-03-28 05:16:32.470182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:16:32.470193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 05:16:32.470204 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 05:16:32.470215 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:16:32.470226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 05:16:32.470237 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:16:32.470248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 05:16:32.470259 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:16:32.470270 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 05:16:32.470281 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 05:16:32.470292 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 05:16:32.470303 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:16:32.470313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 05:16:32.470324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 05:16:32.470335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 05:16:32.470346 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.470357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 05:16:32.470368 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 05:16:32.470378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 05:16:32.470389 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.470400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 05:16:32.470411 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 05:16:32.470422 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 05:16:32.470433 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.470463 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-28 05:16:32.470475 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-28 05:16:32.470485 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-28 05:16:32.470497 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:32.470507 | orchestrator |
2026-03-28 05:16:32.470519 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 05:16:32.470530 | orchestrator | Saturday 28 March 2026 05:16:13 +0000 (0:00:02.427) 0:02:19.904 ********
2026-03-28 05:16:32.470550 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:16:32.470561 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:16:32.470572 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:16:32.470583 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:16:32.470595 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 05:16:32.470606 | orchestrator |
2026-03-28 05:16:32.470618 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:16:32.470630 | orchestrator | Saturday 28 March 2026 05:16:15 +0000 (0:00:02.253) 0:02:22.158 ********
2026-03-28 05:16:32.470647 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.470659 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.470670 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.470681 | orchestrator |
2026-03-28 05:16:32.470700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:16:32.470718 | orchestrator | Saturday 28 March 2026 05:16:17 +0000 (0:00:01.733) 0:02:23.891 ********
2026-03-28 05:16:32.470736 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.470754 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.470772 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.470790 | orchestrator |
2026-03-28 05:16:32.470809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:16:32.470827 | orchestrator | Saturday 28 March 2026 05:16:18 +0000 (0:00:01.416) 0:02:25.308 ********
2026-03-28 05:16:32.470846 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.470900 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:16:32.470920 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:16:32.470939 | orchestrator |
2026-03-28 05:16:32.470958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:16:32.470976 | orchestrator | Saturday 28 March 2026 05:16:20 +0000 (0:00:01.470) 0:02:26.778 ********
2026-03-28 05:16:32.470995 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:16:32.471015 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:16:32.471033 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:16:32.471050 | orchestrator |
2026-03-28 05:16:32.471062 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:16:32.471073 | orchestrator | Saturday 28 March 2026 05:16:21 +0000 (0:00:01.445) 0:02:28.224 ********
2026-03-28 05:16:32.471084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:16:32.471095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:16:32.471106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:16:32.471116 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.471127 | orchestrator |
2026-03-28 05:16:32.471138 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:16:32.471149 | orchestrator | Saturday 28 March 2026 05:16:23 +0000 (0:00:01.723) 0:02:29.948 ********
2026-03-28 05:16:32.471160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:16:32.471171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:16:32.471182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:16:32.471193 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.471204 | orchestrator |
2026-03-28 05:16:32.471215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:16:32.471226 | orchestrator | Saturday 28 March 2026 05:16:25 +0000 (0:00:01.770) 0:02:31.718 ********
2026-03-28 05:16:32.471237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:16:32.471248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:16:32.471259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:16:32.471270 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:16:32.471290 | orchestrator |
2026-03-28 05:16:32.471302 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:16:32.471313 | orchestrator | Saturday 28 March 2026 05:16:27 +0000 (0:00:01.877) 0:02:33.595 ********
2026-03-28 05:16:32.471324 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:16:32.471335 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:16:32.471346 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:16:32.471357 | orchestrator |
2026-03-28 05:16:32.471367 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:16:32.471378 | orchestrator | Saturday 28 March 2026 05:16:28 +0000 (0:00:01.438) 0:02:35.034 ********
2026-03-28 05:16:32.471389 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 05:16:32.471400 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-28 05:16:32.471411 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 05:16:32.471422 | orchestrator |
2026-03-28 05:16:32.471433 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-28 05:16:32.471444 | orchestrator | Saturday 28 March 2026 05:16:30 +0000 (0:00:01.668) 0:02:36.703 ********
2026-03-28 05:16:32.471455 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:16:32.471466 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:16:32.471478 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:16:32.471489 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-28 05:16:32.471510 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 05:17:23.976046 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 05:17:23.976171 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 05:17:23.976188 | orchestrator |
2026-03-28 05:17:23.976201 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-28 05:17:23.976214 | orchestrator | Saturday 28 March 2026 05:16:32 +0000 (0:00:02.177) 0:02:38.881 ********
2026-03-28 05:17:23.976226 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:17:23.976237 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:17:23.976249 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:17:23.976260 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-28 05:17:23.976271 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 05:17:23.976282 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 05:17:23.976310 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 05:17:23.976323 | orchestrator |
2026-03-28 05:17:23.976335 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-03-28 05:17:23.976346 | orchestrator | Saturday 28 March 2026 05:16:35 +0000 (0:00:03.321) 0:02:42.202 ********
2026-03-28 05:17:23.976358 | orchestrator | changed: [testbed-node-3]
2026-03-28 05:17:23.976369 | orchestrator | changed: [testbed-node-4]
2026-03-28 05:17:23.976380 | orchestrator | changed: [testbed-node-5]
2026-03-28 05:17:23.976391 | orchestrator | changed: [testbed-manager]
2026-03-28 05:17:23.976402 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:17:23.976413 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:17:23.976424 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:17:23.976435 | orchestrator |
2026-03-28 05:17:23.976446 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-03-28 05:17:23.976458 | orchestrator | Saturday 28 March 2026 05:16:46 +0000 (0:00:11.154) 0:02:53.357 ********
2026-03-28 05:17:23.976469 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.976480 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.976516 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.976529 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.976542 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.976555 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.976568 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.976580 | orchestrator |
2026-03-28 05:17:23.976593 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-03-28 05:17:23.976606 | orchestrator | Saturday 28 March 2026 05:16:49 +0000 (0:00:02.300) 0:02:55.658 ********
2026-03-28 05:17:23.976618 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.976630 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.976643 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.976655 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.976667 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.976680 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.976692 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.976705 | orchestrator |
2026-03-28 05:17:23.976718 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-03-28 05:17:23.976731 | orchestrator | Saturday 28 March 2026 05:16:51 +0000 (0:00:02.003) 0:02:57.662 ********
2026-03-28 05:17:23.976744 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.976757 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:17:23.976770 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:17:23.976783 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:17:23.976795 | orchestrator | changed: [testbed-node-3]
2026-03-28 05:17:23.976808 | orchestrator | changed: [testbed-node-4]
2026-03-28 05:17:23.976820 | orchestrator | changed: [testbed-node-5]
2026-03-28 05:17:23.976833 | orchestrator |
2026-03-28 05:17:23.976846 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-28 05:17:23.976858 | orchestrator | Saturday 28 March 2026 05:16:54 +0000 (0:00:03.213) 0:03:00.875 ********
2026-03-28 05:17:23.976871 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-28 05:17:23.976883 | orchestrator |
2026-03-28 05:17:23.976916 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-28 05:17:23.976928 | orchestrator | Saturday 28 March 2026 05:16:57 +0000 (0:00:03.146) 0:03:04.022 ********
2026-03-28 05:17:23.976939 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.976950 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.976961 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.976972 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.976983 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.976994 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977005 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977016 | orchestrator |
2026-03-28 05:17:23.977027 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-28 05:17:23.977038 | orchestrator | Saturday 28 March 2026 05:16:59 +0000 (0:00:01.980) 0:03:06.002 ********
2026-03-28 05:17:23.977049 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977060 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977071 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977082 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977092 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977103 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977114 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977125 | orchestrator |
2026-03-28 05:17:23.977136 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-28 05:17:23.977147 | orchestrator | Saturday 28 March 2026 05:17:01 +0000 (0:00:02.322) 0:03:08.325 ********
2026-03-28 05:17:23.977158 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977187 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977207 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977218 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977229 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977240 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977251 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977262 | orchestrator |
2026-03-28 05:17:23.977273 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-28 05:17:23.977284 | orchestrator | Saturday 28 March 2026 05:17:04 +0000 (0:00:02.138) 0:03:10.464 ********
2026-03-28 05:17:23.977295 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977306 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977317 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977328 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977339 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977349 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977360 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977371 | orchestrator |
2026-03-28 05:17:23.977382 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-03-28 05:17:23.977393 | orchestrator | Saturday 28 March 2026 05:17:06 +0000 (0:00:02.381) 0:03:12.846 ********
2026-03-28 05:17:23.977404 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977414 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977431 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977442 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977453 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977464 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977475 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977486 | orchestrator |
2026-03-28 05:17:23.977497 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-28 05:17:23.977508 | orchestrator | Saturday 28 March 2026 05:17:08 +0000 (0:00:02.142) 0:03:14.988 ********
2026-03-28 05:17:23.977519 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977530 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977540 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977551 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977562 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977573 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977583 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977594 | orchestrator |
2026-03-28 05:17:23.977605 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-03-28 05:17:23.977616 | orchestrator | Saturday 28 March 2026 05:17:11 +0000 (0:00:02.463) 0:03:17.452 ********
2026-03-28 05:17:23.977627 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977638 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977649 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977660 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977671 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977682 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977692 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977703 | orchestrator |
2026-03-28 05:17:23.977714 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-03-28 05:17:23.977725 | orchestrator | Saturday 28 March 2026 05:17:13 +0000 (0:00:02.086) 0:03:19.539 ********
2026-03-28 05:17:23.977736 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:17:23.977747 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:17:23.977758 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:17:23.977769 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:17:23.977780 | orchestrator | skipping: [testbed-node-4]
2026-03-28 05:17:23.977791 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:17:23.977802 | orchestrator | skipping: [testbed-manager]
2026-03-28 05:17:23.977813 | orchestrator |
2026-03-28 05:17:23.977824 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-28
05:17:23.977843 | orchestrator | Saturday 28 March 2026 05:17:15 +0000 (0:00:02.479) 0:03:22.018 ******** 2026-03-28 05:17:23.977854 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:23.977865 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:23.977876 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:23.977886 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:23.977916 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:23.977927 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:23.977938 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:23.977949 | orchestrator | 2026-03-28 05:17:23.977960 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-28 05:17:23.977971 | orchestrator | Saturday 28 March 2026 05:17:17 +0000 (0:00:02.223) 0:03:24.241 ******** 2026-03-28 05:17:23.977982 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:23.977993 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:23.978004 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:23.978077 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:23.978092 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:23.978103 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:23.978113 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:23.978124 | orchestrator | 2026-03-28 05:17:23.978136 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-28 05:17:23.978147 | orchestrator | Saturday 28 March 2026 05:17:19 +0000 (0:00:01.973) 0:03:26.214 ******** 2026-03-28 05:17:23.978158 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:23.978169 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:23.978180 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:23.978191 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:23.978201 | orchestrator 
| skipping: [testbed-node-4] 2026-03-28 05:17:23.978212 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:23.978223 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:23.978234 | orchestrator | 2026-03-28 05:17:23.978245 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-28 05:17:23.978256 | orchestrator | Saturday 28 March 2026 05:17:21 +0000 (0:00:02.197) 0:03:28.412 ******** 2026-03-28 05:17:23.979153 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:23.979168 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:23.979177 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:23.979187 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:23.979196 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:23.979206 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:23.979215 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:23.979224 | orchestrator | 2026-03-28 05:17:23.979243 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-28 05:17:46.267040 | orchestrator | Saturday 28 March 2026 05:17:23 +0000 (0:00:01.980) 0:03:30.393 ******** 2026-03-28 05:17:46.267160 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267176 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.267188 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.267201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 05:17:46.267213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 05:17:46.267224 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.267236 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})  2026-03-28 05:17:46.267263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})  2026-03-28 05:17:46.267275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 05:17:46.267307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 05:17:46.267319 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.267330 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.267341 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.267352 | orchestrator | 2026-03-28 05:17:46.267363 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-28 05:17:46.267374 | orchestrator | Saturday 28 March 2026 05:17:26 +0000 (0:00:02.237) 0:03:32.631 ******** 2026-03-28 05:17:46.267385 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267396 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.267407 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.267418 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.267428 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.267439 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.267450 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.267460 | orchestrator | 2026-03-28 05:17:46.267471 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-28 05:17:46.267483 | orchestrator | Saturday 28 March 2026 05:17:28 +0000 (0:00:01.952) 0:03:34.583 ******** 
2026-03-28 05:17:46.267494 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267504 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.267515 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.267526 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.267537 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.267550 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.267563 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.267577 | orchestrator | 2026-03-28 05:17:46.267598 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-28 05:17:46.267618 | orchestrator | Saturday 28 March 2026 05:17:30 +0000 (0:00:02.435) 0:03:37.019 ******** 2026-03-28 05:17:46.267637 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267657 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.267676 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.267695 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.267713 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.267734 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.267753 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.267773 | orchestrator | 2026-03-28 05:17:46.267795 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-28 05:17:46.267815 | orchestrator | Saturday 28 March 2026 05:17:32 +0000 (0:00:01.980) 0:03:39.000 ******** 2026-03-28 05:17:46.267835 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267847 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.267860 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.267873 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.267885 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.267897 | orchestrator | skipping: [testbed-node-5] 
2026-03-28 05:17:46.267940 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.267952 | orchestrator | 2026-03-28 05:17:46.267964 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-03-28 05:17:46.267974 | orchestrator | Saturday 28 March 2026 05:17:34 +0000 (0:00:01.899) 0:03:40.900 ******** 2026-03-28 05:17:46.267985 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.267996 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.268006 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.268017 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.268028 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.268038 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.268049 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.268073 | orchestrator | 2026-03-28 05:17:46.268085 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-28 05:17:46.268095 | orchestrator | Saturday 28 March 2026 05:17:36 +0000 (0:00:02.288) 0:03:43.188 ******** 2026-03-28 05:17:46.268106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:17:46.268117 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.268128 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.268138 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.268149 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.268160 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.268170 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.268181 | orchestrator | 2026-03-28 05:17:46.268192 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-28 05:17:46.268203 | orchestrator | Saturday 28 March 2026 05:17:39 +0000 (0:00:02.300) 0:03:45.489 ******** 2026-03-28 05:17:46.268234 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 05:17:46.268246 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:17:46.268256 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:17:46.268267 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:17:46.268278 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 05:17:46.268290 | orchestrator | 2026-03-28 05:17:46.268301 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-28 05:17:46.268312 | orchestrator | Saturday 28 March 2026 05:17:41 +0000 (0:00:02.709) 0:03:48.199 ******** 2026-03-28 05:17:46.268323 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:17:46.268334 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:17:46.268345 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:17:46.268356 | orchestrator | 2026-03-28 05:17:46.268367 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-28 05:17:46.268378 | orchestrator | Saturday 28 March 2026 05:17:43 +0000 (0:00:01.397) 0:03:49.597 ******** 2026-03-28 05:17:46.268396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 05:17:46.268408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 05:17:46.268419 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.268430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})  2026-03-28 05:17:46.268441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})  
2026-03-28 05:17:46.268452 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.268463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 05:17:46.268473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 05:17:46.268484 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.268495 | orchestrator | 2026-03-28 05:17:46.268506 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-28 05:17:46.268517 | orchestrator | Saturday 28 March 2026 05:17:44 +0000 (0:00:01.402) 0:03:51.000 ******** 2026-03-28 05:17:46.268530 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:46.268551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:46.268562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:46.268574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'}, 'ansible_loop_var': 'item'})  2026-03-28 
05:17:46.268585 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:46.268596 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:46.268607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:46.268619 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:46.268630 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:46.268641 | orchestrator | 2026-03-28 05:17:46.268659 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-28 05:17:56.567422 | orchestrator | Saturday 28 March 2026 05:17:46 +0000 (0:00:01.674) 0:03:52.674 ******** 2026-03-28 05:17:56.567554 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:56.567573 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:56.567586 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:56.567597 | orchestrator | 2026-03-28 05:17:56.568328 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-28 05:17:56.568350 | orchestrator | Saturday 28 March 2026 05:17:47 +0000 (0:00:01.398) 
0:03:54.073 ******** 2026-03-28 05:17:56.568363 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:56.568374 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:56.568385 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:56.568396 | orchestrator | 2026-03-28 05:17:56.568408 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-28 05:17:56.568419 | orchestrator | Saturday 28 March 2026 05:17:49 +0000 (0:00:01.407) 0:03:55.480 ******** 2026-03-28 05:17:56.568430 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:56.568441 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:56.568453 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:56.568464 | orchestrator | 2026-03-28 05:17:56.568493 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-28 05:17:56.568504 | orchestrator | Saturday 28 March 2026 05:17:50 +0000 (0:00:01.382) 0:03:56.863 ******** 2026-03-28 05:17:56.568516 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:56.568527 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:56.568538 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:17:56.568549 | orchestrator | 2026-03-28 05:17:56.568560 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-28 05:17:56.568571 | orchestrator | Saturday 28 March 2026 05:17:51 +0000 (0:00:01.496) 0:03:58.359 ******** 2026-03-28 05:17:56.568582 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}) 2026-03-28 05:17:56.568619 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'}) 2026-03-28 05:17:56.568631 | orchestrator | ok: [testbed-node-3] => (item={'data': 
'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}) 2026-03-28 05:17:56.568642 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}) 2026-03-28 05:17:56.568653 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}) 2026-03-28 05:17:56.568664 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}) 2026-03-28 05:17:56.568675 | orchestrator | 2026-03-28 05:17:56.568686 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-28 05:17:56.568698 | orchestrator | Saturday 28 March 2026 05:17:55 +0000 (0:00:03.156) 0:04:01.515 ******** 2026-03-28 05:17:56.568715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-e94d822c-120c-5920-885f-96546946f9a0/osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1774667097.7891629, 'mtime': 1774667097.7831628, 'ctime': 1774667097.7831628, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-e94d822c-120c-5920-885f-96546946f9a0/osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:56.568760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb/osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1774667116.8434544, 'mtime': 1774667116.8364542, 'ctime': 1774667116.8364542, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb/osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:56.568774 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:17:56.568795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181/osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1774667098.392353, 'mtime': 1774667098.387353, 'ctime': 1774667098.387353, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181/osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:56.568808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41/osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1774667119.679689, 'mtime': 1774667119.6746888, 'ctime': 1774667119.6746888, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 
'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41/osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}, 'ansible_loop_var': 'item'})  2026-03-28 05:17:56.568820 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:17:56.568840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5/osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1774667099.6987708, 'mtime': 1774667099.693771, 'ctime': 1774667099.693771, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5/osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}, 'ansible_loop_var': 'item'})  2026-03-28 
05:18:02.607793 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-e38c52ab-9b1d-5b26-b141-c51106128b29/osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1774667118.864068, 'mtime': 1774667118.8590682, 'ctime': 1774667118.8590682, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-e38c52ab-9b1d-5b26-b141-c51106128b29/osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.607909 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:02.607977 | orchestrator | 2026-03-28 05:18:02.607992 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-03-28 05:18:02.608006 | orchestrator | Saturday 28 March 2026 05:17:56 +0000 (0:00:01.471) 0:04:02.987 ******** 2026-03-28 05:18:02.608018 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 05:18:02.608031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 05:18:02.608043 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:02.608055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})  2026-03-28 05:18:02.608066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})  2026-03-28 05:18:02.608078 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:02.608089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 05:18:02.608100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 05:18:02.608112 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:02.608123 | orchestrator | 2026-03-28 05:18:02.608135 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-03-28 05:18:02.608147 | orchestrator | Saturday 28 March 2026 05:17:58 +0000 (0:00:01.450) 0:04:04.437 ******** 2026-03-28 05:18:02.608161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608187 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:02.608198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608276 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:02.608288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608314 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:02.608327 | orchestrator | 2026-03-28 05:18:02.608340 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-28 05:18:02.608354 | orchestrator | 
Saturday 28 March 2026 05:17:59 +0000 (0:00:01.434) 0:04:05.871 ******** 2026-03-28 05:18:02.608367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'})  2026-03-28 05:18:02.608381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'})  2026-03-28 05:18:02.608393 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:02.608407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'})  2026-03-28 05:18:02.608420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'})  2026-03-28 05:18:02.608433 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:02.608445 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'})  2026-03-28 05:18:02.608459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'})  2026-03-28 05:18:02.608471 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:02.608484 | orchestrator | 2026-03-28 05:18:02.608497 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-28 05:18:02.608510 | orchestrator | Saturday 28 March 2026 05:18:01 +0000 (0:00:01.746) 0:04:07.618 ******** 2026-03-28 05:18:02.608524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': 
{'data': 'osd-block-e94d822c-120c-5920-885f-96546946f9a0', 'data_vg': 'ceph-e94d822c-120c-5920-885f-96546946f9a0'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-97a2d1a8-b450-5e97-9b32-db4bafa583cb', 'data_vg': 'ceph-97a2d1a8-b450-5e97-9b32-db4bafa583cb'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608558 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:02.608571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-80a8d2d8-5d5c-5988-8f38-8985bde94181', 'data_vg': 'ceph-80a8d2d8-5d5c-5988-8f38-8985bde94181'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608585 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41', 'data_vg': 'ceph-9e2c40d7-ed5b-5b0c-9c02-6c53c9658e41'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608598 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:02.608612 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-988a6493-5e43-51ae-8e8a-a4936b4cd9b5', 'data_vg': 'ceph-988a6493-5e43-51ae-8e8a-a4936b4cd9b5'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:02.608638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-e38c52ab-9b1d-5b26-b141-c51106128b29', 
'data_vg': 'ceph-e38c52ab-9b1d-5b26-b141-c51106128b29'}, 'ansible_loop_var': 'item'})  2026-03-28 05:18:12.350153 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:12.350261 | orchestrator | 2026-03-28 05:18:12.350279 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-28 05:18:12.350292 | orchestrator | Saturday 28 March 2026 05:18:02 +0000 (0:00:01.406) 0:04:09.024 ******** 2026-03-28 05:18:12.350303 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:12.350314 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:12.350325 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:12.350337 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:12.350348 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:12.350358 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:12.350369 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:12.350380 | orchestrator | 2026-03-28 05:18:12.350392 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-28 05:18:12.350403 | orchestrator | Saturday 28 March 2026 05:18:04 +0000 (0:00:01.908) 0:04:10.933 ******** 2026-03-28 05:18:12.350414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:12.350425 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:12.350436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:12.350447 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:12.350458 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 05:18:12.350469 | orchestrator | 2026-03-28 05:18:12.350480 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-28 05:18:12.350491 | orchestrator | Saturday 28 March 2026 05:18:07 +0000 (0:00:02.681) 0:04:13.615 ******** 2026-03-28 05:18:12.350503 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350589 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:12.350602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350666 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:12.350678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-03-28 05:18:12.350690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350740 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:12.350753 | orchestrator | 2026-03-28 05:18:12.350767 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-28 05:18:12.350780 | orchestrator | Saturday 28 March 2026 05:18:08 +0000 (0:00:01.406) 0:04:15.021 ******** 2026-03-28 05:18:12.350793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350890 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:12.350902 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.350981 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:12.350992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351055 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:12.351066 | orchestrator | 2026-03-28 05:18:12.351077 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 
2026-03-28 05:18:12.351088 | orchestrator | Saturday 28 March 2026 05:18:10 +0000 (0:00:01.836) 0:04:16.858 ******** 2026-03-28 05:18:12.351099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351154 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:12.351165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351219 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:12.351230 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 05:18:12.351284 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:12.351295 | orchestrator | 2026-03-28 05:18:12.351307 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-28 05:18:12.351323 | orchestrator | Saturday 28 March 2026 05:18:11 +0000 (0:00:01.457) 0:04:18.316 ******** 2026-03-28 05:18:12.351335 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:12.351346 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:12.351374 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.988593 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.988711 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.988728 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.988740 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.988751 | orchestrator | 2026-03-28 05:18:27.988764 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-28 05:18:27.988790 | orchestrator | Saturday 28 March 2026 05:18:13 +0000 (0:00:01.848) 0:04:20.164 ******** 2026-03-28 05:18:27.988802 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.988813 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.988824 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.988835 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.988895 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.988907 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.988918 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.988929 | orchestrator | 2026-03-28 05:18:27.988940 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-28 05:18:27.988951 | orchestrator | Saturday 28 March 2026 05:18:16 +0000 (0:00:02.463) 0:04:22.628 ******** 2026-03-28 05:18:27.988962 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.988973 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989090 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.989104 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.989115 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.989126 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.989137 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.989150 | orchestrator | 2026-03-28 05:18:27.989164 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-03-28 05:18:27.989177 | orchestrator | Saturday 28 March 2026 05:18:18 +0000 (0:00:02.208) 0:04:24.836 ******** 2026-03-28 05:18:27.989190 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.989202 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989214 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.989226 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.989239 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.989257 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.989277 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.989296 | orchestrator | 2026-03-28 05:18:27.989317 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-28 05:18:27.989338 | orchestrator | Saturday 28 March 2026 05:18:20 +0000 (0:00:01.965) 0:04:26.802 ******** 2026-03-28 05:18:27.989358 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.989377 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989389 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.989402 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.989415 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.989426 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.989437 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.989448 | orchestrator | 2026-03-28 05:18:27.989459 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-28 05:18:27.989470 | orchestrator | Saturday 28 March 2026 05:18:22 +0000 (0:00:02.317) 0:04:29.120 ******** 2026-03-28 05:18:27.989482 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.989494 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989504 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.989516 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 05:18:27.989527 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.989538 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.989549 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.989559 | orchestrator | 2026-03-28 05:18:27.989571 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-28 05:18:27.989606 | orchestrator | Saturday 28 March 2026 05:18:24 +0000 (0:00:02.115) 0:04:31.236 ******** 2026-03-28 05:18:27.989618 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.989629 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989640 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.989650 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:27.989661 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:27.989672 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:27.989683 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:27.989694 | orchestrator | 2026-03-28 05:18:27.989704 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-28 05:18:27.989716 | orchestrator | Saturday 28 March 2026 05:18:27 +0000 (0:00:02.254) 0:04:33.491 ******** 2026-03-28 05:18:27.989728 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:27.989741 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:27.989753 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:27.989766 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:27.989777 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:27.989807 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:27.989819 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:27.989894 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:27.989907 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:27.989917 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:27.989928 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:27.989939 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:27.989951 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-03-28 05:18:27.989962 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:27.989973 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:27.989984 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:27.989994 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:27.990079 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:27.990095 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:27.990106 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:27.990117 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:27.990129 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:27.990140 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:27.990150 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:27.990161 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:27.990172 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:27.990183 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:27.990195 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:27.990206 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:27.990223 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:27.990244 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.395499 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.395606 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd 
pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.395625 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.395637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:32.395650 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.395661 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:32.395673 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.395711 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.395724 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.395734 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.395745 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.395756 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.395767 | orchestrator | skipping: [testbed-manager] 
=> (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.395778 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.395789 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:32.395800 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.395811 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.395855 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:32.395869 | orchestrator | 2026-03-28 05:18:32.395881 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-03-28 05:18:32.395894 | orchestrator | Saturday 28 March 2026 05:18:29 +0000 (0:00:02.197) 0:04:35.689 ******** 2026-03-28 05:18:32.395905 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:32.395916 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:32.395927 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:32.395937 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:18:32.395948 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:18:32.395959 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:18:32.395969 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:18:32.395980 | orchestrator | 2026-03-28 05:18:32.395991 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-03-28 05:18:32.396002 | 
orchestrator | Saturday 28 March 2026 05:18:31 +0000 (0:00:02.241) 0:04:37.931 ******** 2026-03-28 05:18:32.396013 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.396024 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.396052 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.396083 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.396097 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.396118 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.396130 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:18:32.396143 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.396155 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.396168 | orchestrator | skipping: [testbed-node-1] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.396180 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.396192 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.396205 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.396217 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:18:32.396230 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.396242 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.396254 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.396266 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.396279 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.396291 | 
orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.396304 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:18:32.396316 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.396329 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:18:32.396340 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:18:32.396353 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:18:32.396366 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:18:32.396385 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:18:32.396401 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:18:32.396421 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  
2026-03-28 05:19:05.820711 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:19:05.820821 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:19:05.820836 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.820849 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:19:05.820860 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:19:05.820872 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:19:05.820884 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:19:05.820894 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-28 05:19:05.820904 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-28 05:19:05.820914 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd 
pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-28 05:19:05.820924 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:19:05.820934 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:19:05.820944 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:19:05.820954 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.820964 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:19:05.820974 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.820984 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-28 05:19:05.820993 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-28 05:19:05.821026 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-28 05:19:05.821037 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821047 | orchestrator | 2026-03-28 05:19:05.821057 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-03-28 05:19:05.821068 
| orchestrator | Saturday 28 March 2026 05:18:33 +0000 (0:00:02.224) 0:04:40.156 ******** 2026-03-28 05:19:05.821077 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.821087 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.821097 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.821106 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.821116 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.821126 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821135 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.821145 | orchestrator | 2026-03-28 05:19:05.821154 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-28 05:19:05.821164 | orchestrator | Saturday 28 March 2026 05:18:36 +0000 (0:00:02.322) 0:04:42.479 ******** 2026-03-28 05:19:05.821174 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.821197 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.821209 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.821221 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.821232 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.821244 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821256 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.821267 | orchestrator | 2026-03-28 05:19:05.821279 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-28 05:19:05.821306 | orchestrator | Saturday 28 March 2026 05:18:39 +0000 (0:00:02.965) 0:04:45.445 ******** 2026-03-28 05:19:05.821318 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.821329 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.821340 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.821351 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.821362 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 05:19:05.821373 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821384 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.821396 | orchestrator | 2026-03-28 05:19:05.821407 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-28 05:19:05.821418 | orchestrator | Saturday 28 March 2026 05:18:41 +0000 (0:00:02.553) 0:04:47.998 ******** 2026-03-28 05:19:05.821430 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-28 05:19:05.821443 | orchestrator | 2026-03-28 05:19:05.821454 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-28 05:19:05.821466 | orchestrator | Saturday 28 March 2026 05:18:44 +0000 (0:00:03.212) 0:04:51.211 ******** 2026-03-28 05:19:05.821478 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821490 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821501 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821512 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821524 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821535 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821546 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-28 05:19:05.821565 | orchestrator | 2026-03-28 05:19:05.821575 | orchestrator | TASK [ceph-container-engine : Create 
the systemd docker override directory] **** 2026-03-28 05:19:05.821585 | orchestrator | Saturday 28 March 2026 05:18:47 +0000 (0:00:02.349) 0:04:53.560 ******** 2026-03-28 05:19:05.821594 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.821604 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.821613 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.821623 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.821633 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.821642 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821652 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.821694 | orchestrator | 2026-03-28 05:19:05.821704 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-28 05:19:05.821714 | orchestrator | Saturday 28 March 2026 05:18:49 +0000 (0:00:02.413) 0:04:55.973 ******** 2026-03-28 05:19:05.821724 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.821734 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.821743 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.821756 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.821773 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.821790 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.821806 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.821820 | orchestrator | 2026-03-28 05:19:05.821835 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-28 05:19:05.821850 | orchestrator | Saturday 28 March 2026 05:18:51 +0000 (0:00:02.253) 0:04:58.227 ******** 2026-03-28 05:19:05.821866 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:05.821881 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:19:05.821896 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:19:05.821911 | orchestrator | ok: [testbed-node-3] 2026-03-28 
05:19:05.821927 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:19:05.821942 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:19:05.821958 | orchestrator | ok: [testbed-manager] 2026-03-28 05:19:05.821972 | orchestrator | 2026-03-28 05:19:05.821986 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-28 05:19:05.822001 | orchestrator | Saturday 28 March 2026 05:18:54 +0000 (0:00:02.986) 0:05:01.214 ******** 2026-03-28 05:19:05.822083 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.822107 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.822123 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.822140 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.822156 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.822173 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.822191 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.822207 | orchestrator | 2026-03-28 05:19:05.822225 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-28 05:19:05.822235 | orchestrator | Saturday 28 March 2026 05:18:57 +0000 (0:00:02.919) 0:05:04.133 ******** 2026-03-28 05:19:05.822245 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.822255 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:19:05.822264 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:19:05.822274 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:19:05.822284 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:19:05.822293 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:19:05.822303 | orchestrator | skipping: [testbed-manager] 2026-03-28 05:19:05.822313 | orchestrator | 2026-03-28 05:19:05.822322 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-03-28 05:19:05.822341 | orchestrator | Saturday 28 March 2026 05:19:00 
+0000 (0:00:02.306) 0:05:06.440 ******** 2026-03-28 05:19:05.822351 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:05.822361 | orchestrator | 2026-03-28 05:19:05.822374 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-28 05:19:05.822385 | orchestrator | Saturday 28 March 2026 05:19:02 +0000 (0:00:02.797) 0:05:09.238 ******** 2026-03-28 05:19:05.822405 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:05.822415 | orchestrator | 2026-03-28 05:19:05.822437 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-28 05:19:46.386777 | orchestrator | 2026-03-28 05:19:46.386893 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:19:46.386909 | orchestrator | Saturday 28 March 2026 05:19:05 +0000 (0:00:02.996) 0:05:12.234 ******** 2026-03-28 05:19:46.386921 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.386934 | orchestrator | 2026-03-28 05:19:46.386945 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:19:46.386957 | orchestrator | Saturday 28 March 2026 05:19:07 +0000 (0:00:01.545) 0:05:13.780 ******** 2026-03-28 05:19:46.386968 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.386978 | orchestrator | 2026-03-28 05:19:46.386990 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-28 05:19:46.387000 | orchestrator | Saturday 28 March 2026 05:19:08 +0000 (0:00:01.153) 0:05:14.934 ******** 2026-03-28 05:19:46.387013 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 
'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:19:46.387028 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:19:46.387039 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 05:19:46.387050 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 05:19:46.387063 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-28 05:19:46.387076 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}])  2026-03-28 05:19:46.387089 | orchestrator | 2026-03-28 05:19:46.387100 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-28 05:19:46.387111 | orchestrator | 2026-03-28 05:19:46.387122 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-28 05:19:46.387133 | orchestrator | Saturday 28 March 2026 05:19:19 +0000 (0:00:10.690) 0:05:25.624 ******** 2026-03-28 05:19:46.387144 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387155 | orchestrator | 2026-03-28 05:19:46.387165 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-28 05:19:46.387199 | orchestrator | Saturday 28 March 2026 05:19:20 +0000 (0:00:01.525) 0:05:27.150 ******** 2026-03-28 05:19:46.387211 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387222 | orchestrator | 2026-03-28 05:19:46.387233 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-28 05:19:46.387244 | orchestrator | Saturday 28 March 2026 05:19:21 +0000 (0:00:01.171) 0:05:28.322 ******** 2026-03-28 05:19:46.387255 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:46.387265 | orchestrator | 2026-03-28 05:19:46.387277 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-28 05:19:46.387287 | orchestrator | Saturday 28 March 2026 05:19:23 +0000 (0:00:01.209) 0:05:29.531 ******** 2026-03-28 05:19:46.387298 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387309 | orchestrator | 2026-03-28 05:19:46.387334 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:19:46.387346 | orchestrator | Saturday 28 March 2026 
05:19:24 +0000 (0:00:01.190) 0:05:30.722 ******** 2026-03-28 05:19:46.387357 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-28 05:19:46.387367 | orchestrator | 2026-03-28 05:19:46.387378 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:19:46.387407 | orchestrator | Saturday 28 March 2026 05:19:25 +0000 (0:00:01.137) 0:05:31.860 ******** 2026-03-28 05:19:46.387419 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387430 | orchestrator | 2026-03-28 05:19:46.387440 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:19:46.387451 | orchestrator | Saturday 28 March 2026 05:19:26 +0000 (0:00:01.479) 0:05:33.340 ******** 2026-03-28 05:19:46.387462 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387499 | orchestrator | 2026-03-28 05:19:46.387511 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:19:46.387523 | orchestrator | Saturday 28 March 2026 05:19:28 +0000 (0:00:01.190) 0:05:34.530 ******** 2026-03-28 05:19:46.387533 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387544 | orchestrator | 2026-03-28 05:19:46.387555 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:19:46.387566 | orchestrator | Saturday 28 March 2026 05:19:29 +0000 (0:00:01.536) 0:05:36.067 ******** 2026-03-28 05:19:46.387576 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387587 | orchestrator | 2026-03-28 05:19:46.387598 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:19:46.387608 | orchestrator | Saturday 28 March 2026 05:19:30 +0000 (0:00:01.207) 0:05:37.275 ******** 2026-03-28 05:19:46.387619 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387630 | orchestrator | 2026-03-28 05:19:46.387641 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 05:19:46.387652 | orchestrator | Saturday 28 March 2026 05:19:31 +0000 (0:00:01.146) 0:05:38.421 ******** 2026-03-28 05:19:46.387662 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387673 | orchestrator | 2026-03-28 05:19:46.387684 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:19:46.387695 | orchestrator | Saturday 28 March 2026 05:19:33 +0000 (0:00:01.177) 0:05:39.599 ******** 2026-03-28 05:19:46.387706 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:46.387717 | orchestrator | 2026-03-28 05:19:46.387728 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:19:46.387738 | orchestrator | Saturday 28 March 2026 05:19:34 +0000 (0:00:01.147) 0:05:40.746 ******** 2026-03-28 05:19:46.387749 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387760 | orchestrator | 2026-03-28 05:19:46.387770 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:19:46.387781 | orchestrator | Saturday 28 March 2026 05:19:35 +0000 (0:00:01.134) 0:05:41.881 ******** 2026-03-28 05:19:46.387792 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:19:46.387811 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:19:46.387822 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:19:46.387833 | orchestrator | 2026-03-28 05:19:46.387844 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:19:46.387854 | orchestrator | Saturday 28 March 2026 05:19:37 +0000 (0:00:01.717) 0:05:43.598 ******** 2026-03-28 05:19:46.387865 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:19:46.387876 | 
orchestrator | 2026-03-28 05:19:46.387887 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 05:19:46.387897 | orchestrator | Saturday 28 March 2026 05:19:38 +0000 (0:00:01.221) 0:05:44.819 ******** 2026-03-28 05:19:46.387908 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:19:46.387919 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:19:46.387930 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:19:46.387940 | orchestrator | 2026-03-28 05:19:46.387951 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:19:46.387961 | orchestrator | Saturday 28 March 2026 05:19:41 +0000 (0:00:03.261) 0:05:48.081 ******** 2026-03-28 05:19:46.387972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 05:19:46.387983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 05:19:46.387994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 05:19:46.388005 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:46.388015 | orchestrator | 2026-03-28 05:19:46.388026 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:19:46.388037 | orchestrator | Saturday 28 March 2026 05:19:43 +0000 (0:00:01.458) 0:05:49.540 ******** 2026-03-28 05:19:46.388048 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:19:46.388062 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 05:19:46.388073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:19:46.388090 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:19:46.388101 | orchestrator | 2026-03-28 05:19:46.388112 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:19:46.388123 | orchestrator | Saturday 28 March 2026 05:19:45 +0000 (0:00:02.090) 0:05:51.630 ******** 2026-03-28 05:19:46.388143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:07.086784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:07.086920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:07.086964 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.086978 | orchestrator | 2026-03-28 05:20:07.086991 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:20:07.087003 | orchestrator | Saturday 28 March 2026 05:19:46 +0000 (0:00:01.176) 0:05:52.807 ******** 2026-03-28 05:20:07.087017 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a580dbf75b8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:19:38.966723', 'end': '2026-03-28 05:19:39.026504', 'delta': '0:00:00.059781', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a580dbf75b8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:20:07.087031 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '63c01d28d51e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:19:39.558645', 'end': '2026-03-28 05:19:39.607377', 'delta': '0:00:00.048732', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63c01d28d51e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:20:07.087043 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '99ef085e2de2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:19:40.475602', 'end': '2026-03-28 05:19:40.513889', 'delta': '0:00:00.038287', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['99ef085e2de2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 05:20:07.087055 | orchestrator | 2026-03-28 05:20:07.087066 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 05:20:07.087092 | orchestrator | Saturday 28 March 2026 05:19:47 +0000 (0:00:01.299) 0:05:54.106 ******** 2026-03-28 05:20:07.087104 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:07.087116 | orchestrator | 2026-03-28 05:20:07.087127 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 05:20:07.087138 | orchestrator | Saturday 28 March 2026 05:19:48 +0000 (0:00:01.277) 0:05:55.383 ******** 2026-03-28 05:20:07.087149 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087159 | orchestrator | 2026-03-28 05:20:07.087170 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 05:20:07.087181 | orchestrator | Saturday 28 March 2026 05:19:50 +0000 (0:00:01.228) 0:05:56.612 ******** 2026-03-28 05:20:07.087192 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:07.087203 | orchestrator | 2026-03-28 05:20:07.087213 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-28 05:20:07.087232 | orchestrator | Saturday 28 March 2026 05:19:51 +0000 (0:00:01.139) 0:05:57.752 ******** 2026-03-28 05:20:07.087261 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-28 05:20:07.087273 | orchestrator | 2026-03-28 05:20:07.087284 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:20:07.087295 | orchestrator | Saturday 28 March 2026 05:19:53 +0000 (0:00:02.461) 0:06:00.213 ******** 2026-03-28 05:20:07.087307 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:07.087318 | orchestrator | 2026-03-28 05:20:07.087331 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 05:20:07.087344 | orchestrator | Saturday 28 March 2026 05:19:54 +0000 (0:00:01.207) 0:06:01.420 ******** 2026-03-28 05:20:07.087357 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087370 | orchestrator | 2026-03-28 05:20:07.087403 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:20:07.087417 | orchestrator | Saturday 28 March 2026 05:19:56 +0000 (0:00:01.204) 0:06:02.625 ******** 2026-03-28 05:20:07.087430 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087444 | orchestrator | 2026-03-28 05:20:07.087457 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:20:07.087470 | orchestrator | Saturday 28 March 2026 05:19:57 +0000 (0:00:01.301) 0:06:03.927 ******** 2026-03-28 05:20:07.087483 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087496 | orchestrator | 2026-03-28 05:20:07.087509 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:20:07.087521 | orchestrator | Saturday 28 March 2026 05:19:58 +0000 (0:00:01.201) 0:06:05.129 ******** 
2026-03-28 05:20:07.087534 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087547 | orchestrator | 2026-03-28 05:20:07.087560 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:20:07.087573 | orchestrator | Saturday 28 March 2026 05:19:59 +0000 (0:00:01.136) 0:06:06.265 ******** 2026-03-28 05:20:07.087587 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087599 | orchestrator | 2026-03-28 05:20:07.087612 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:20:07.087625 | orchestrator | Saturday 28 March 2026 05:20:01 +0000 (0:00:01.226) 0:06:07.492 ******** 2026-03-28 05:20:07.087638 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087651 | orchestrator | 2026-03-28 05:20:07.087664 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:20:07.087678 | orchestrator | Saturday 28 March 2026 05:20:02 +0000 (0:00:01.141) 0:06:08.634 ******** 2026-03-28 05:20:07.087691 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087702 | orchestrator | 2026-03-28 05:20:07.087713 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:20:07.087724 | orchestrator | Saturday 28 March 2026 05:20:03 +0000 (0:00:01.172) 0:06:09.806 ******** 2026-03-28 05:20:07.087735 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087746 | orchestrator | 2026-03-28 05:20:07.087757 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:20:07.087769 | orchestrator | Saturday 28 March 2026 05:20:04 +0000 (0:00:01.139) 0:06:10.945 ******** 2026-03-28 05:20:07.087780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:07.087791 | orchestrator | 2026-03-28 05:20:07.087802 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-28 05:20:07.087813 | orchestrator | Saturday 28 March 2026 05:20:05 +0000 (0:00:01.235) 0:06:12.181 ******** 2026-03-28 05:20:07.087825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:07.087845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:07.087863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:07.087876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:20:07.087897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:08.284669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:08.284756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:08.284776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:20:08.284828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:08.284859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:20:08.284881 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:08.284895 | orchestrator | 2026-03-28 05:20:08.284908 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:20:08.284920 | orchestrator | Saturday 28 March 2026 05:20:07 +0000 (0:00:01.304) 0:06:13.485 ******** 2026-03-28 05:20:08.284949 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.284963 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.285001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.285025 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.285043 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.285055 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:08.285075 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:32.971471 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:32.971665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:32.971694 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:20:32.971713 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.971732 | orchestrator | 2026-03-28 05:20:32.971750 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:20:32.971766 | 
orchestrator | Saturday 28 March 2026 05:20:08 +0000 (0:00:01.223) 0:06:14.708 ******** 2026-03-28 05:20:32.971784 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:32.971802 | orchestrator | 2026-03-28 05:20:32.971819 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:20:32.971836 | orchestrator | Saturday 28 March 2026 05:20:09 +0000 (0:00:01.513) 0:06:16.222 ******** 2026-03-28 05:20:32.971853 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:32.971870 | orchestrator | 2026-03-28 05:20:32.971889 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:20:32.971929 | orchestrator | Saturday 28 March 2026 05:20:10 +0000 (0:00:01.122) 0:06:17.344 ******** 2026-03-28 05:20:32.971948 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:20:32.971967 | orchestrator | 2026-03-28 05:20:32.971985 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:20:32.972004 | orchestrator | Saturday 28 March 2026 05:20:12 +0000 (0:00:01.544) 0:06:18.888 ******** 2026-03-28 05:20:32.972022 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972040 | orchestrator | 2026-03-28 05:20:32.972058 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:20:32.972076 | orchestrator | Saturday 28 March 2026 05:20:13 +0000 (0:00:01.196) 0:06:20.085 ******** 2026-03-28 05:20:32.972095 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972113 | orchestrator | 2026-03-28 05:20:32.972132 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:20:32.972165 | orchestrator | Saturday 28 March 2026 05:20:15 +0000 (0:00:01.358) 0:06:21.444 ******** 2026-03-28 05:20:32.972184 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972202 | orchestrator | 2026-03-28 05:20:32.972220 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:20:32.972239 | orchestrator | Saturday 28 March 2026 05:20:16 +0000 (0:00:01.222) 0:06:22.666 ******** 2026-03-28 05:20:32.972256 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:20:32.972276 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 05:20:32.972322 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 05:20:32.972338 | orchestrator | 2026-03-28 05:20:32.972355 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:20:32.972370 | orchestrator | Saturday 28 March 2026 05:20:18 +0000 (0:00:02.014) 0:06:24.680 ******** 2026-03-28 05:20:32.972387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 05:20:32.972404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 05:20:32.972421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 05:20:32.972438 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972454 | orchestrator | 2026-03-28 05:20:32.972470 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:20:32.972486 | orchestrator | Saturday 28 March 2026 05:20:19 +0000 (0:00:01.172) 0:06:25.853 ******** 2026-03-28 05:20:32.972502 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972519 | orchestrator | 2026-03-28 05:20:32.972537 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:20:32.972554 | orchestrator | Saturday 28 March 2026 05:20:20 +0000 (0:00:01.207) 0:06:27.061 ******** 2026-03-28 05:20:32.972572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:20:32.972587 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 
05:20:32.972606 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:20:32.972623 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:20:32.972640 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:20:32.972657 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:20:32.972672 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:20:32.972687 | orchestrator | 2026-03-28 05:20:32.972703 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:20:32.972718 | orchestrator | Saturday 28 March 2026 05:20:22 +0000 (0:00:02.157) 0:06:29.219 ******** 2026-03-28 05:20:32.972733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:20:32.972749 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:20:32.972773 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:20:32.972789 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:20:32.972805 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:20:32.972820 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:20:32.972835 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:20:32.972849 | orchestrator | 2026-03-28 05:20:32.972864 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-28 05:20:32.972878 | orchestrator | Saturday 28 March 2026 05:20:25 +0000 (0:00:03.175) 0:06:32.395 
******** 2026-03-28 05:20:32.972894 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-28 05:20:32.972910 | orchestrator | 2026-03-28 05:20:32.972924 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-28 05:20:32.972950 | orchestrator | Saturday 28 March 2026 05:20:28 +0000 (0:00:02.297) 0:06:34.693 ******** 2026-03-28 05:20:32.972965 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.972981 | orchestrator | 2026-03-28 05:20:32.972996 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-28 05:20:32.973012 | orchestrator | Saturday 28 March 2026 05:20:29 +0000 (0:00:01.290) 0:06:35.984 ******** 2026-03-28 05:20:32.973027 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:20:32.973042 | orchestrator | 2026-03-28 05:20:32.973057 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-28 05:20:32.973073 | orchestrator | Saturday 28 March 2026 05:20:30 +0000 (0:00:01.121) 0:06:37.106 ******** 2026-03-28 05:20:32.973088 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-03-28 05:20:32.973103 | orchestrator | 2026-03-28 05:20:32.973119 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-28 05:20:32.973146 | orchestrator | Saturday 28 March 2026 05:20:32 +0000 (0:00:02.283) 0:06:39.389 ******** 2026-03-28 05:21:36.061475 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:21:36.061594 | orchestrator | 2026-03-28 05:21:36.061602 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-28 05:21:36.061609 | orchestrator | Saturday 28 March 2026 05:20:34 +0000 (0:00:01.139) 0:06:40.529 ******** 2026-03-28 05:21:36.061615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:21:36.061620 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:21:36.061627 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:21:36.061631 | orchestrator |
2026-03-28 05:21:36.061636 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-28 05:21:36.061641 | orchestrator | Saturday 28 March 2026 05:20:36 +0000 (0:00:02.631) 0:06:43.161 ********
2026-03-28 05:21:36.061645 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-28 05:21:36.061650 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-28 05:21:36.061656 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-28 05:21:36.061660 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-28 05:21:36.061665 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-28 05:21:36.061671 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-28 05:21:36.061675 | orchestrator |
2026-03-28 05:21:36.061680 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-28 05:21:36.061685 | orchestrator | Saturday 28 March 2026 05:20:50 +0000 (0:00:13.501) 0:06:56.662 ********
2026-03-28 05:21:36.061689 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:21:36.061694 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:21:36.061699 | orchestrator |
2026-03-28 05:21:36.061703 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-28 05:21:36.061709 | orchestrator | Saturday 28 March 2026 05:20:54 +0000 (0:00:03.893) 0:07:00.556 ********
2026-03-28 05:21:36.061714 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:21:36.061719 | orchestrator |
2026-03-28 05:21:36.061723 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 05:21:36.061728 | orchestrator | Saturday 28 March 2026 05:20:56 +0000 (0:00:02.688) 0:07:03.245 ********
2026-03-28 05:21:36.061732 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-28 05:21:36.061737 | orchestrator |
2026-03-28 05:21:36.061742 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 05:21:36.061767 | orchestrator | Saturday 28 March 2026 05:20:58 +0000 (0:00:01.520) 0:07:04.766 ********
2026-03-28 05:21:36.061772 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-28 05:21:36.061777 | orchestrator |
2026-03-28 05:21:36.061781 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 05:21:36.061786 | orchestrator | Saturday 28 March 2026 05:21:00 +0000 (0:00:01.799) 0:07:06.565 ********
2026-03-28 05:21:36.061790 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.061795 | orchestrator |
2026-03-28 05:21:36.061800 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 05:21:36.061804 | orchestrator | Saturday 28 March 2026 05:21:01 +0000 (0:00:01.571) 0:07:08.137 ********
2026-03-28 05:21:36.061809 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061813 | orchestrator |
2026-03-28 05:21:36.061832 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 05:21:36.061837 | orchestrator | Saturday 28 March 2026 05:21:02 +0000 (0:00:01.158) 0:07:09.296 ********
2026-03-28 05:21:36.061841 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061846 | orchestrator |
2026-03-28 05:21:36.061850 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 05:21:36.061855 | orchestrator | Saturday 28 March 2026 05:21:03 +0000 (0:00:01.112) 0:07:10.408 ********
2026-03-28 05:21:36.061860 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061864 | orchestrator |
2026-03-28 05:21:36.061869 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 05:21:36.061874 | orchestrator | Saturday 28 March 2026 05:21:05 +0000 (0:00:01.208) 0:07:11.617 ********
2026-03-28 05:21:36.061878 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.061883 | orchestrator |
2026-03-28 05:21:36.061887 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 05:21:36.061892 | orchestrator | Saturday 28 March 2026 05:21:06 +0000 (0:00:01.583) 0:07:13.200 ********
2026-03-28 05:21:36.061896 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061901 | orchestrator |
2026-03-28 05:21:36.061906 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 05:21:36.061910 | orchestrator | Saturday 28 March 2026 05:21:07 +0000 (0:00:01.173) 0:07:14.374 ********
2026-03-28 05:21:36.061915 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061919 | orchestrator |
2026-03-28 05:21:36.061924 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 05:21:36.061928 | orchestrator | Saturday 28 March 2026 05:21:09 +0000 (0:00:01.122) 0:07:15.496 ********
2026-03-28 05:21:36.061933 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.061937 | orchestrator |
2026-03-28 05:21:36.061942 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 05:21:36.061947 | orchestrator | Saturday 28 March 2026 05:21:10 +0000 (0:00:01.586) 0:07:17.083 ********
2026-03-28 05:21:36.061951 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.061956 | orchestrator |
2026-03-28 05:21:36.061974 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 05:21:36.061979 | orchestrator | Saturday 28 March 2026 05:21:12 +0000 (0:00:01.682) 0:07:18.766 ********
2026-03-28 05:21:36.061984 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.061988 | orchestrator |
2026-03-28 05:21:36.061993 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 05:21:36.061998 | orchestrator | Saturday 28 March 2026 05:21:13 +0000 (0:00:01.144) 0:07:19.911 ********
2026-03-28 05:21:36.062003 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.062008 | orchestrator |
2026-03-28 05:21:36.062013 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 05:21:36.062069 | orchestrator | Saturday 28 March 2026 05:21:14 +0000 (0:00:01.231) 0:07:21.142 ********
2026-03-28 05:21:36.062074 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062080 | orchestrator |
2026-03-28 05:21:36.062085 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 05:21:36.062095 | orchestrator | Saturday 28 March 2026 05:21:16 +0000 (0:00:01.331) 0:07:22.473 ********
2026-03-28 05:21:36.062100 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062106 | orchestrator |
2026-03-28 05:21:36.062111 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 05:21:36.062117 | orchestrator | Saturday 28 March 2026 05:21:17 +0000 (0:00:01.188) 0:07:23.662 ********
2026-03-28 05:21:36.062122 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062128 | orchestrator |
2026-03-28 05:21:36.062133 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 05:21:36.062138 | orchestrator | Saturday 28 March 2026 05:21:18 +0000 (0:00:01.159) 0:07:24.821 ********
2026-03-28 05:21:36.062144 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062149 | orchestrator |
2026-03-28 05:21:36.062154 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 05:21:36.062160 | orchestrator | Saturday 28 March 2026 05:21:19 +0000 (0:00:01.167) 0:07:25.989 ********
2026-03-28 05:21:36.062165 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062170 | orchestrator |
2026-03-28 05:21:36.062176 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 05:21:36.062181 | orchestrator | Saturday 28 March 2026 05:21:20 +0000 (0:00:01.128) 0:07:27.118 ********
2026-03-28 05:21:36.062187 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.062192 | orchestrator |
2026-03-28 05:21:36.062197 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 05:21:36.062203 | orchestrator | Saturday 28 March 2026 05:21:21 +0000 (0:00:01.150) 0:07:28.269 ********
2026-03-28 05:21:36.062208 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.062213 | orchestrator |
2026-03-28 05:21:36.062219 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 05:21:36.062224 | orchestrator | Saturday 28 March 2026 05:21:23 +0000 (0:00:01.192) 0:07:29.462 ********
2026-03-28 05:21:36.062230 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:21:36.062235 | orchestrator |
2026-03-28 05:21:36.062241 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 05:21:36.062246 | orchestrator | Saturday 28 March 2026 05:21:24 +0000 (0:00:01.182) 0:07:30.644 ********
2026-03-28 05:21:36.062251 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062257 | orchestrator |
2026-03-28 05:21:36.062262 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 05:21:36.062268 | orchestrator | Saturday 28 March 2026 05:21:25 +0000 (0:00:01.162) 0:07:31.807 ********
2026-03-28 05:21:36.062273 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062278 | orchestrator |
2026-03-28 05:21:36.062283 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 05:21:36.062289 | orchestrator | Saturday 28 March 2026 05:21:26 +0000 (0:00:01.133) 0:07:32.941 ********
2026-03-28 05:21:36.062294 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062299 | orchestrator |
2026-03-28 05:21:36.062305 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 05:21:36.062310 | orchestrator | Saturday 28 March 2026 05:21:27 +0000 (0:00:01.198) 0:07:34.139 ********
2026-03-28 05:21:36.062319 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062324 | orchestrator |
2026-03-28 05:21:36.062330 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 05:21:36.062335 | orchestrator | Saturday 28 March 2026 05:21:28 +0000 (0:00:01.180) 0:07:35.319 ********
2026-03-28 05:21:36.062341 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062346 | orchestrator |
2026-03-28 05:21:36.062351 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 05:21:36.062357 | orchestrator | Saturday 28 March 2026 05:21:30 +0000 (0:00:01.263) 0:07:36.583 ********
2026-03-28 05:21:36.062362 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062368 | orchestrator |
2026-03-28 05:21:36.062372 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 05:21:36.062381 | orchestrator | Saturday 28 March 2026 05:21:31 +0000 (0:00:01.204) 0:07:37.787 ********
2026-03-28 05:21:36.062385 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062390 | orchestrator |
2026-03-28 05:21:36.062395 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 05:21:36.062399 | orchestrator | Saturday 28 March 2026 05:21:32 +0000 (0:00:01.170) 0:07:38.957 ********
2026-03-28 05:21:36.062404 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062408 | orchestrator |
2026-03-28 05:21:36.062413 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 05:21:36.062418 | orchestrator | Saturday 28 March 2026 05:21:33 +0000 (0:00:01.258) 0:07:40.216 ********
2026-03-28 05:21:36.062422 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062427 | orchestrator |
2026-03-28 05:21:36.062432 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 05:21:36.062436 | orchestrator | Saturday 28 March 2026 05:21:34 +0000 (0:00:01.126) 0:07:41.343 ********
2026-03-28 05:21:36.062441 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:21:36.062445 | orchestrator |
2026-03-28 05:21:36.062450 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 05:21:36.062455 | orchestrator | Saturday 28 March 2026 05:21:36 +0000 (0:00:01.135) 0:07:42.478 ********
2026-03-28 05:22:28.930441 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.930558 | orchestrator |
2026-03-28 05:22:28.930576 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 05:22:28.930590 | orchestrator | Saturday 28 March 2026 05:21:37 +0000 (0:00:01.130) 0:07:43.609 ********
2026-03-28 05:22:28.930602 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.930613 | orchestrator |
2026-03-28 05:22:28.930624 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 05:22:28.930636 | orchestrator | Saturday 28 March 2026 05:21:38 +0000 (0:00:01.153) 0:07:44.763 ********
2026-03-28 05:22:28.930648 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.930660 | orchestrator |
2026-03-28 05:22:28.930671 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 05:22:28.930681 | orchestrator | Saturday 28 March 2026 05:21:40 +0000 (0:00:02.015) 0:07:46.778 ********
2026-03-28 05:22:28.930692 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.930703 | orchestrator |
2026-03-28 05:22:28.930714 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 05:22:28.930725 | orchestrator | Saturday 28 March 2026 05:21:42 +0000 (0:00:02.543) 0:07:49.322 ********
2026-03-28 05:22:28.930736 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-28 05:22:28.930747 | orchestrator |
2026-03-28 05:22:28.930758 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 05:22:28.930769 | orchestrator | Saturday 28 March 2026 05:21:44 +0000 (0:00:01.503) 0:07:50.826 ********
2026-03-28 05:22:28.930780 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.930791 | orchestrator |
2026-03-28 05:22:28.930801 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 05:22:28.930816 | orchestrator | Saturday 28 March 2026 05:21:45 +0000 (0:00:01.262) 0:07:52.088 ********
2026-03-28 05:22:28.930834 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.930852 | orchestrator |
2026-03-28 05:22:28.930915 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 05:22:28.930935 | orchestrator | Saturday 28 March 2026 05:21:46 +0000 (0:00:01.178) 0:07:53.267 ********
2026-03-28 05:22:28.930953 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 05:22:28.930970 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 05:22:28.930988 | orchestrator |
2026-03-28 05:22:28.931006 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 05:22:28.931056 | orchestrator | Saturday 28 March 2026 05:21:48 +0000 (0:00:01.831) 0:07:55.099 ********
2026-03-28 05:22:28.931078 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.931099 | orchestrator |
2026-03-28 05:22:28.931117 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 05:22:28.931135 | orchestrator | Saturday 28 March 2026 05:21:50 +0000 (0:00:01.703) 0:07:56.803 ********
2026-03-28 05:22:28.931149 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931161 | orchestrator |
2026-03-28 05:22:28.931174 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 05:22:28.931186 | orchestrator | Saturday 28 March 2026 05:21:51 +0000 (0:00:01.184) 0:07:57.987 ********
2026-03-28 05:22:28.931198 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931211 | orchestrator |
2026-03-28 05:22:28.931223 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 05:22:28.931235 | orchestrator | Saturday 28 March 2026 05:21:52 +0000 (0:00:01.167) 0:07:59.155 ********
2026-03-28 05:22:28.931248 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931259 | orchestrator |
2026-03-28 05:22:28.931271 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 05:22:28.931284 | orchestrator | Saturday 28 March 2026 05:21:53 +0000 (0:00:01.153) 0:08:00.308 ********
2026-03-28 05:22:28.931311 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-28 05:22:28.931325 | orchestrator |
2026-03-28 05:22:28.931338 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 05:22:28.931349 | orchestrator | Saturday 28 March 2026 05:21:55 +0000 (0:00:01.589) 0:08:01.898 ********
2026-03-28 05:22:28.931359 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.931370 | orchestrator |
2026-03-28 05:22:28.931381 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 05:22:28.931392 | orchestrator | Saturday 28 March 2026 05:21:57 +0000 (0:00:01.784) 0:08:03.682 ********
2026-03-28 05:22:28.931403 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 05:22:28.931413 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 05:22:28.931424 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 05:22:28.931435 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931446 | orchestrator |
2026-03-28 05:22:28.931457 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 05:22:28.931467 | orchestrator | Saturday 28 March 2026 05:21:58 +0000 (0:00:01.137) 0:08:04.820 ********
2026-03-28 05:22:28.931478 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931489 | orchestrator |
2026-03-28 05:22:28.931500 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 05:22:28.931511 | orchestrator | Saturday 28 March 2026 05:21:59 +0000 (0:00:01.141) 0:08:05.962 ********
2026-03-28 05:22:28.931522 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931532 | orchestrator |
2026-03-28 05:22:28.931543 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 05:22:28.931554 | orchestrator | Saturday 28 March 2026 05:22:00 +0000 (0:00:01.167) 0:08:07.129 ********
2026-03-28 05:22:28.931564 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931575 | orchestrator |
2026-03-28 05:22:28.931586 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 05:22:28.931615 | orchestrator | Saturday 28 March 2026 05:22:01 +0000 (0:00:01.181) 0:08:08.310 ********
2026-03-28 05:22:28.931627 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931638 | orchestrator |
2026-03-28 05:22:28.931649 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 05:22:28.931660 | orchestrator | Saturday 28 March 2026 05:22:03 +0000 (0:00:01.330) 0:08:09.640 ********
2026-03-28 05:22:28.931671 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931691 | orchestrator |
2026-03-28 05:22:28.931702 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 05:22:28.931713 | orchestrator | Saturday 28 March 2026 05:22:04 +0000 (0:00:01.173) 0:08:10.814 ********
2026-03-28 05:22:28.931723 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.931734 | orchestrator |
2026-03-28 05:22:28.931745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 05:22:28.931756 | orchestrator | Saturday 28 March 2026 05:22:06 +0000 (0:00:02.551) 0:08:13.365 ********
2026-03-28 05:22:28.931767 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.931778 | orchestrator |
2026-03-28 05:22:28.931788 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 05:22:28.931799 | orchestrator | Saturday 28 March 2026 05:22:08 +0000 (0:00:01.196) 0:08:14.562 ********
2026-03-28 05:22:28.931810 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-28 05:22:28.931821 | orchestrator |
2026-03-28 05:22:28.931832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 05:22:28.931843 | orchestrator | Saturday 28 March 2026 05:22:09 +0000 (0:00:01.534) 0:08:16.097 ********
2026-03-28 05:22:28.931853 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931898 | orchestrator |
2026-03-28 05:22:28.931909 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 05:22:28.931920 | orchestrator | Saturday 28 March 2026 05:22:10 +0000 (0:00:01.227) 0:08:17.324 ********
2026-03-28 05:22:28.931931 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931942 | orchestrator |
2026-03-28 05:22:28.931952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 05:22:28.931963 | orchestrator | Saturday 28 March 2026 05:22:12 +0000 (0:00:01.204) 0:08:18.529 ********
2026-03-28 05:22:28.931974 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.931985 | orchestrator |
2026-03-28 05:22:28.931995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 05:22:28.932006 | orchestrator | Saturday 28 March 2026 05:22:13 +0000 (0:00:01.252) 0:08:19.781 ********
2026-03-28 05:22:28.932017 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.932028 | orchestrator |
2026-03-28 05:22:28.932038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 05:22:28.932049 | orchestrator | Saturday 28 March 2026 05:22:14 +0000 (0:00:01.168) 0:08:20.949 ********
2026-03-28 05:22:28.932060 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.932071 | orchestrator |
2026-03-28 05:22:28.932082 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 05:22:28.932092 | orchestrator | Saturday 28 March 2026 05:22:15 +0000 (0:00:01.148) 0:08:22.098 ********
2026-03-28 05:22:28.932103 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.932114 | orchestrator |
2026-03-28 05:22:28.932124 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 05:22:28.932135 | orchestrator | Saturday 28 March 2026 05:22:16 +0000 (0:00:01.158) 0:08:23.256 ********
2026-03-28 05:22:28.932146 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.932157 | orchestrator |
2026-03-28 05:22:28.932167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 05:22:28.932178 | orchestrator | Saturday 28 March 2026 05:22:17 +0000 (0:00:01.163) 0:08:24.420 ********
2026-03-28 05:22:28.932189 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:22:28.932199 | orchestrator |
2026-03-28 05:22:28.932210 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 05:22:28.932221 | orchestrator | Saturday 28 March 2026 05:22:19 +0000 (0:00:01.204) 0:08:25.624 ********
2026-03-28 05:22:28.932237 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:22:28.932248 | orchestrator |
2026-03-28 05:22:28.932259 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 05:22:28.932270 | orchestrator | Saturday 28 March 2026 05:22:20 +0000 (0:00:01.208) 0:08:26.833 ********
2026-03-28 05:22:28.932287 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-28 05:22:28.932299 | orchestrator |
2026-03-28 05:22:28.932309 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 05:22:28.932320 | orchestrator | Saturday 28 March 2026 05:22:21 +0000 (0:00:01.539) 0:08:28.372 ********
2026-03-28 05:22:28.932331 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-28 05:22:28.932342 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-28 05:22:28.932353 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-28 05:22:28.932363 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-28 05:22:28.932374 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-28 05:22:28.932385 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-28 05:22:28.932395 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-28 05:22:28.932406 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-28 05:22:28.932417 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 05:22:28.932428 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 05:22:28.932439 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 05:22:28.932450 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 05:22:28.932460 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 05:22:28.932471 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 05:22:28.932488 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 05:23:18.235475 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 05:23:18.235622 | orchestrator |
2026-03-28 05:23:18.235649 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 05:23:18.235668 | orchestrator | Saturday 28 March 2026 05:22:28 +0000 (0:00:06.967) 0:08:35.339 ********
2026-03-28 05:23:18.235685 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.235704 | orchestrator |
2026-03-28 05:23:18.235777 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 05:23:18.235797 | orchestrator | Saturday 28 March 2026 05:22:30 +0000 (0:00:01.153) 0:08:36.493 ********
2026-03-28 05:23:18.235814 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.235830 | orchestrator |
2026-03-28 05:23:18.235847 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 05:23:18.235863 | orchestrator | Saturday 28 March 2026 05:22:31 +0000 (0:00:01.161) 0:08:37.655 ********
2026-03-28 05:23:18.235880 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.235895 | orchestrator |
2026-03-28 05:23:18.235912 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 05:23:18.235929 | orchestrator | Saturday 28 March 2026 05:22:32 +0000 (0:00:01.150) 0:08:38.805 ********
2026-03-28 05:23:18.235946 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.235961 | orchestrator |
2026-03-28 05:23:18.235976 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 05:23:18.235994 | orchestrator | Saturday 28 March 2026 05:22:33 +0000 (0:00:01.147) 0:08:39.952 ********
2026-03-28 05:23:18.236011 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236026 | orchestrator |
2026-03-28 05:23:18.236043 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 05:23:18.236060 | orchestrator | Saturday 28 March 2026 05:22:34 +0000 (0:00:01.121) 0:08:41.074 ********
2026-03-28 05:23:18.236075 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236089 | orchestrator |
2026-03-28 05:23:18.236105 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 05:23:18.236125 | orchestrator | Saturday 28 March 2026 05:22:35 +0000 (0:00:01.172) 0:08:42.247 ********
2026-03-28 05:23:18.236141 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236192 | orchestrator |
2026-03-28 05:23:18.236211 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 05:23:18.236227 | orchestrator | Saturday 28 March 2026 05:22:36 +0000 (0:00:01.127) 0:08:43.374 ********
2026-03-28 05:23:18.236243 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236259 | orchestrator |
2026-03-28 05:23:18.236274 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 05:23:18.236289 | orchestrator | Saturday 28 March 2026 05:22:38 +0000 (0:00:01.312) 0:08:44.687 ********
2026-03-28 05:23:18.236307 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236324 | orchestrator |
2026-03-28 05:23:18.236340 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 05:23:18.236356 | orchestrator | Saturday 28 March 2026 05:22:39 +0000 (0:00:01.202) 0:08:45.889 ********
2026-03-28 05:23:18.236372 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236389 | orchestrator |
2026-03-28 05:23:18.236406 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 05:23:18.236424 | orchestrator | Saturday 28 March 2026 05:22:40 +0000 (0:00:01.177) 0:08:47.067 ********
2026-03-28 05:23:18.236440 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236456 | orchestrator |
2026-03-28 05:23:18.236472 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 05:23:18.236488 | orchestrator | Saturday 28 March 2026 05:22:41 +0000 (0:00:01.242) 0:08:48.309 ********
2026-03-28 05:23:18.236503 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236519 | orchestrator |
2026-03-28 05:23:18.236534 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 05:23:18.236571 | orchestrator | Saturday 28 March 2026 05:22:43 +0000 (0:00:01.202) 0:08:49.512 ********
2026-03-28 05:23:18.236590 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236606 | orchestrator |
2026-03-28 05:23:18.236623 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 05:23:18.236640 | orchestrator | Saturday 28 March 2026 05:22:44 +0000 (0:00:01.262) 0:08:50.774 ********
2026-03-28 05:23:18.236657 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236673 | orchestrator |
2026-03-28 05:23:18.236690 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 05:23:18.236706 | orchestrator | Saturday 28 March 2026 05:22:45 +0000 (0:00:01.221) 0:08:51.996 ********
2026-03-28 05:23:18.236763 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236781 | orchestrator |
2026-03-28 05:23:18.236797 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 05:23:18.236815 | orchestrator | Saturday 28 March 2026 05:22:46 +0000 (0:00:01.264) 0:08:53.260 ********
2026-03-28 05:23:18.236832 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236848 | orchestrator |
2026-03-28 05:23:18.236864 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 05:23:18.236880 | orchestrator | Saturday 28 March 2026 05:22:48 +0000 (0:00:01.213) 0:08:54.474 ********
2026-03-28 05:23:18.236896 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236913 | orchestrator |
2026-03-28 05:23:18.236930 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:23:18.236947 | orchestrator | Saturday 28 March 2026 05:22:49 +0000 (0:00:01.126) 0:08:55.600 ********
2026-03-28 05:23:18.236964 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.236981 | orchestrator |
2026-03-28 05:23:18.236991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:23:18.237001 | orchestrator | Saturday 28 March 2026 05:22:50 +0000 (0:00:01.186) 0:08:56.786 ********
2026-03-28 05:23:18.237011 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237020 | orchestrator |
2026-03-28 05:23:18.237052 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:23:18.237063 | orchestrator | Saturday 28 March 2026 05:22:51 +0000 (0:00:01.165) 0:08:57.951 ********
2026-03-28 05:23:18.237084 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237094 | orchestrator |
2026-03-28 05:23:18.237104 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:23:18.237113 | orchestrator | Saturday 28 March 2026 05:22:52 +0000 (0:00:01.134) 0:08:59.086 ********
2026-03-28 05:23:18.237123 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237132 | orchestrator |
2026-03-28 05:23:18.237142 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:23:18.237152 | orchestrator | Saturday 28 March 2026 05:22:53 +0000 (0:00:01.335) 0:09:00.421 ********
2026-03-28 05:23:18.237169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:23:18.237186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:23:18.237202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:23:18.237218 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237234 | orchestrator |
2026-03-28 05:23:18.237249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:23:18.237264 | orchestrator | Saturday 28 March 2026 05:22:55 +0000 (0:00:01.551) 0:09:01.972 ********
2026-03-28 05:23:18.237279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:23:18.237296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:23:18.237313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:23:18.237331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237348 | orchestrator |
2026-03-28 05:23:18.237365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:23:18.237382 | orchestrator | Saturday 28 March 2026 05:22:57 +0000 (0:00:01.461) 0:09:03.434 ********
2026-03-28 05:23:18.237397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:23:18.237414 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:23:18.237431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:23:18.237449 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237465 | orchestrator |
2026-03-28 05:23:18.237481 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:23:18.237498 | orchestrator | Saturday 28 March 2026 05:22:58 +0000 (0:00:01.435) 0:09:04.869 ********
2026-03-28 05:23:18.237515 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237532 | orchestrator |
2026-03-28 05:23:18.237548 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:23:18.237564 | orchestrator | Saturday 28 March 2026 05:22:59 +0000 (0:00:01.167) 0:09:06.037 ********
2026-03-28 05:23:18.237580 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-28 05:23:18.237598 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.237614 | orchestrator |
2026-03-28 05:23:18.237632 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:23:18.237650 | orchestrator | Saturday 28 March 2026 05:23:01 +0000 (0:00:01.445) 0:09:07.483 ********
2026-03-28 05:23:18.237666 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:23:18.237683 | orchestrator |
2026-03-28 05:23:18.237699 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-28 05:23:18.237740 | orchestrator | Saturday 28 March 2026 05:23:02 +0000 (0:00:01.937) 0:09:09.420 ********
2026-03-28 05:23:18.237758 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:23:18.237775 | orchestrator |
2026-03-28 05:23:18.237792 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-28 05:23:18.237808 | orchestrator | Saturday 28 March 2026 05:23:04 +0000 (0:00:01.167) 0:09:10.589 ********
2026-03-28 05:23:18.237824 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-03-28 05:23:18.237842 | orchestrator |
2026-03-28 05:23:18.237870 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-28 05:23:18.237900 | orchestrator | Saturday 28 March 2026 05:23:05 +0000 (0:00:01.529) 0:09:12.119 ********
2026-03-28 05:23:18.237917 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-28 05:23:18.237933 | orchestrator |
2026-03-28 05:23:18.237950 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-28 05:23:18.237967 | orchestrator | Saturday 28 March 2026 05:23:09 +0000 (0:00:03.750) 0:09:15.870 ********
2026-03-28 05:23:18.237982 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:23:18.238000 | orchestrator |
2026-03-28 05:23:18.238081 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-28 05:23:18.238103 | orchestrator | Saturday 28 March 2026 05:23:10 +0000 (0:00:01.257) 0:09:17.127 ********
2026-03-28 05:23:18.238120 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:23:18.238137 | orchestrator |
2026-03-28 05:23:18.238154 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-28 05:23:18.238171 | orchestrator | Saturday 28 March 2026 05:23:11 +0000 (0:00:01.157) 0:09:18.285 ********
2026-03-28 05:23:18.238187 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:23:18.238204 | orchestrator |
2026-03-28 05:23:18.238220 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-28 05:23:18.238236 | orchestrator | Saturday 28 March 2026 05:23:13 +0000 (0:00:01.244) 0:09:19.530 ********
2026-03-28 05:23:18.238253 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:23:18.238271 | orchestrator |
2026-03-28 05:23:18.238286 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-28 05:23:18.238304 | orchestrator | Saturday 28 March 2026 05:23:15 +0000 (0:00:02.052) 0:09:21.583 ********
2026-03-28 05:23:18.238320 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:23:18.238336 | orchestrator |
2026-03-28 05:23:18.238353 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-28 05:23:18.238370 | orchestrator | Saturday 28 March 2026 05:23:16 +0000 (0:00:01.479) 0:09:23.176 ********
2026-03-28 05:23:18.238387 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:23:18.238404 | orchestrator |
2026-03-28 05:23:18.238434 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-28 05:24:16.683619 | orchestrator | Saturday 28 March 2026 05:23:18 +0000 (0:00:01.479)
0:09:24.655 ******** 2026-03-28 05:24:16.683743 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.683761 | orchestrator | 2026-03-28 05:24:16.683773 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-28 05:24:16.683785 | orchestrator | Saturday 28 March 2026 05:23:19 +0000 (0:00:01.525) 0:09:26.181 ******** 2026-03-28 05:24:16.683796 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.683808 | orchestrator | 2026-03-28 05:24:16.683819 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-28 05:24:16.683830 | orchestrator | Saturday 28 March 2026 05:23:21 +0000 (0:00:01.820) 0:09:28.002 ******** 2026-03-28 05:24:16.683841 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.683852 | orchestrator | 2026-03-28 05:24:16.683863 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-28 05:24:16.683874 | orchestrator | Saturday 28 March 2026 05:23:23 +0000 (0:00:01.674) 0:09:29.676 ******** 2026-03-28 05:24:16.683885 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 05:24:16.683897 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 05:24:16.683908 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 05:24:16.683919 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-28 05:24:16.683930 | orchestrator | 2026-03-28 05:24:16.683941 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-28 05:24:16.683951 | orchestrator | Saturday 28 March 2026 05:23:27 +0000 (0:00:03.913) 0:09:33.589 ******** 2026-03-28 05:24:16.683962 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:24:16.683973 | orchestrator | 2026-03-28 05:24:16.683984 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-28 
05:24:16.684019 | orchestrator | Saturday 28 March 2026 05:23:29 +0000 (0:00:02.108) 0:09:35.697 ******** 2026-03-28 05:24:16.684031 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684042 | orchestrator | 2026-03-28 05:24:16.684053 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-28 05:24:16.684063 | orchestrator | Saturday 28 March 2026 05:23:30 +0000 (0:00:01.174) 0:09:36.872 ******** 2026-03-28 05:24:16.684074 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684085 | orchestrator | 2026-03-28 05:24:16.684096 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-28 05:24:16.684108 | orchestrator | Saturday 28 March 2026 05:23:31 +0000 (0:00:01.192) 0:09:38.065 ******** 2026-03-28 05:24:16.684121 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684133 | orchestrator | 2026-03-28 05:24:16.684147 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-28 05:24:16.684160 | orchestrator | Saturday 28 March 2026 05:23:33 +0000 (0:00:02.180) 0:09:40.245 ******** 2026-03-28 05:24:16.684172 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684185 | orchestrator | 2026-03-28 05:24:16.684198 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-28 05:24:16.684210 | orchestrator | Saturday 28 March 2026 05:23:35 +0000 (0:00:01.483) 0:09:41.729 ******** 2026-03-28 05:24:16.684222 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:16.684235 | orchestrator | 2026-03-28 05:24:16.684247 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-28 05:24:16.684259 | orchestrator | Saturday 28 March 2026 05:23:36 +0000 (0:00:01.157) 0:09:42.887 ******** 2026-03-28 05:24:16.684272 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-28 
05:24:16.684285 | orchestrator | 2026-03-28 05:24:16.684298 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-28 05:24:16.684310 | orchestrator | Saturday 28 March 2026 05:23:37 +0000 (0:00:01.532) 0:09:44.419 ******** 2026-03-28 05:24:16.684323 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:16.684336 | orchestrator | 2026-03-28 05:24:16.684348 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-28 05:24:16.684376 | orchestrator | Saturday 28 March 2026 05:23:39 +0000 (0:00:01.109) 0:09:45.528 ******** 2026-03-28 05:24:16.684390 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:16.684402 | orchestrator | 2026-03-28 05:24:16.684415 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-28 05:24:16.684427 | orchestrator | Saturday 28 March 2026 05:23:40 +0000 (0:00:01.101) 0:09:46.629 ******** 2026-03-28 05:24:16.684439 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-28 05:24:16.684452 | orchestrator | 2026-03-28 05:24:16.684465 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-28 05:24:16.684475 | orchestrator | Saturday 28 March 2026 05:23:41 +0000 (0:00:01.511) 0:09:48.141 ******** 2026-03-28 05:24:16.684486 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684497 | orchestrator | 2026-03-28 05:24:16.684508 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-28 05:24:16.684519 | orchestrator | Saturday 28 March 2026 05:23:44 +0000 (0:00:02.348) 0:09:50.489 ******** 2026-03-28 05:24:16.684529 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684540 | orchestrator | 2026-03-28 05:24:16.684576 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-28 
05:24:16.684590 | orchestrator | Saturday 28 March 2026 05:23:46 +0000 (0:00:02.011) 0:09:52.501 ******** 2026-03-28 05:24:16.684600 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684611 | orchestrator | 2026-03-28 05:24:16.684622 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-28 05:24:16.684633 | orchestrator | Saturday 28 March 2026 05:23:48 +0000 (0:00:02.501) 0:09:55.002 ******** 2026-03-28 05:24:16.684644 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:24:16.684655 | orchestrator | 2026-03-28 05:24:16.684666 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-28 05:24:16.684685 | orchestrator | Saturday 28 March 2026 05:23:52 +0000 (0:00:03.549) 0:09:58.552 ******** 2026-03-28 05:24:16.684696 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-28 05:24:16.684707 | orchestrator | 2026-03-28 05:24:16.684735 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-28 05:24:16.684747 | orchestrator | Saturday 28 March 2026 05:23:53 +0000 (0:00:01.660) 0:10:00.212 ******** 2026-03-28 05:24:16.684758 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684769 | orchestrator | 2026-03-28 05:24:16.684780 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-28 05:24:16.684791 | orchestrator | Saturday 28 March 2026 05:23:55 +0000 (0:00:02.217) 0:10:02.430 ******** 2026-03-28 05:24:16.684802 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:16.684813 | orchestrator | 2026-03-28 05:24:16.684823 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-28 05:24:16.684834 | orchestrator | Saturday 28 March 2026 05:23:58 +0000 (0:00:02.940) 0:10:05.371 ******** 2026-03-28 05:24:16.684845 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:16.684856 | orchestrator | 2026-03-28 05:24:16.684867 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-28 05:24:16.684878 | orchestrator | Saturday 28 March 2026 05:24:00 +0000 (0:00:01.216) 0:10:06.588 ******** 2026-03-28 05:24:16.684891 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:24:16.684906 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-28 05:24:16.684917 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 05:24:16.684928 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 05:24:16.684941 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-28 05:24:16.684959 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}])  2026-03-28 05:24:16.684972 | orchestrator | 2026-03-28 05:24:16.684984 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-28 05:24:16.685001 | orchestrator | Saturday 28 March 2026 05:24:10 +0000 (0:00:09.931) 0:10:16.519 ******** 
2026-03-28 05:24:16.685012 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:24:16.685024 | orchestrator | 2026-03-28 05:24:16.685034 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:24:16.685045 | orchestrator | Saturday 28 March 2026 05:24:12 +0000 (0:00:02.630) 0:10:19.150 ******** 2026-03-28 05:24:16.685056 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:24:16.685067 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 05:24:16.685078 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 05:24:16.685089 | orchestrator | 2026-03-28 05:24:16.685100 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:24:16.685111 | orchestrator | Saturday 28 March 2026 05:24:15 +0000 (0:00:02.531) 0:10:21.682 ******** 2026-03-28 05:24:16.685122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 05:24:16.685133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 05:24:16.685144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 05:24:16.685155 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:16.685166 | orchestrator | 2026-03-28 05:24:16.685177 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-28 05:24:16.685193 | orchestrator | Saturday 28 March 2026 05:24:16 +0000 (0:00:01.415) 0:10:23.098 ******** 2026-03-28 05:24:55.289162 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289309 | orchestrator | 2026-03-28 05:24:55.289367 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-28 05:24:55.289391 | orchestrator | Saturday 28 March 2026 05:24:17 +0000 (0:00:01.194) 0:10:24.292 ******** 2026-03-28 05:24:55.289412 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:24:55.289432 | orchestrator | 2026-03-28 05:24:55.289450 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 05:24:55.289544 | orchestrator | Saturday 28 March 2026 05:24:20 +0000 (0:00:02.422) 0:10:26.714 ******** 2026-03-28 05:24:55.289564 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289583 | orchestrator | 2026-03-28 05:24:55.289602 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 05:24:55.289620 | orchestrator | Saturday 28 March 2026 05:24:21 +0000 (0:00:01.188) 0:10:27.904 ******** 2026-03-28 05:24:55.289640 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289658 | orchestrator | 2026-03-28 05:24:55.289679 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 05:24:55.289691 | orchestrator | Saturday 28 March 2026 05:24:22 +0000 (0:00:01.157) 0:10:29.062 ******** 2026-03-28 05:24:55.289702 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289713 | orchestrator | 2026-03-28 05:24:55.289724 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 05:24:55.289735 | orchestrator | Saturday 28 March 2026 05:24:23 +0000 (0:00:01.152) 0:10:30.215 ******** 2026-03-28 05:24:55.289746 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289757 | orchestrator | 2026-03-28 05:24:55.289768 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 05:24:55.289779 | orchestrator | Saturday 28 March 2026 05:24:24 +0000 (0:00:01.142) 0:10:31.358 ******** 2026-03-28 05:24:55.289790 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289801 | 
orchestrator | 2026-03-28 05:24:55.289812 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 05:24:55.289823 | orchestrator | Saturday 28 March 2026 05:24:26 +0000 (0:00:01.209) 0:10:32.567 ******** 2026-03-28 05:24:55.289833 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289844 | orchestrator | 2026-03-28 05:24:55.289855 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 05:24:55.289866 | orchestrator | Saturday 28 March 2026 05:24:27 +0000 (0:00:01.189) 0:10:33.757 ******** 2026-03-28 05:24:55.289906 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:24:55.289918 | orchestrator | 2026-03-28 05:24:55.289929 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-28 05:24:55.289940 | orchestrator | 2026-03-28 05:24:55.289951 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-28 05:24:55.289962 | orchestrator | Saturday 28 March 2026 05:24:28 +0000 (0:00:00.965) 0:10:34.722 ******** 2026-03-28 05:24:55.289973 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.289983 | orchestrator | 2026-03-28 05:24:55.289994 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-28 05:24:55.290005 | orchestrator | Saturday 28 March 2026 05:24:29 +0000 (0:00:01.180) 0:10:35.903 ******** 2026-03-28 05:24:55.290071 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290083 | orchestrator | 2026-03-28 05:24:55.290094 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-28 05:24:55.290105 | orchestrator | Saturday 28 March 2026 05:24:30 +0000 (0:00:00.807) 0:10:36.711 ******** 2026-03-28 05:24:55.290116 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:24:55.290127 | orchestrator | 2026-03-28 05:24:55.290138 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-28 05:24:55.290149 | orchestrator | Saturday 28 March 2026 05:24:31 +0000 (0:00:00.786) 0:10:37.497 ******** 2026-03-28 05:24:55.290160 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290171 | orchestrator | 2026-03-28 05:24:55.290182 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:24:55.290208 | orchestrator | Saturday 28 March 2026 05:24:31 +0000 (0:00:00.788) 0:10:38.286 ******** 2026-03-28 05:24:55.290220 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-28 05:24:55.290231 | orchestrator | 2026-03-28 05:24:55.290242 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:24:55.290252 | orchestrator | Saturday 28 March 2026 05:24:33 +0000 (0:00:01.359) 0:10:39.645 ******** 2026-03-28 05:24:55.290263 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290274 | orchestrator | 2026-03-28 05:24:55.290285 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:24:55.290296 | orchestrator | Saturday 28 March 2026 05:24:34 +0000 (0:00:01.458) 0:10:41.104 ******** 2026-03-28 05:24:55.290307 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290318 | orchestrator | 2026-03-28 05:24:55.290329 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:24:55.290340 | orchestrator | Saturday 28 March 2026 05:24:35 +0000 (0:00:01.188) 0:10:42.292 ******** 2026-03-28 05:24:55.290351 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290361 | orchestrator | 2026-03-28 05:24:55.290372 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:24:55.290383 | orchestrator | Saturday 28 March 2026 05:24:37 +0000 (0:00:01.481) 0:10:43.774 
******** 2026-03-28 05:24:55.290394 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290405 | orchestrator | 2026-03-28 05:24:55.290416 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:24:55.290427 | orchestrator | Saturday 28 March 2026 05:24:38 +0000 (0:00:01.158) 0:10:44.933 ******** 2026-03-28 05:24:55.290438 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290448 | orchestrator | 2026-03-28 05:24:55.290483 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 05:24:55.290495 | orchestrator | Saturday 28 March 2026 05:24:39 +0000 (0:00:01.134) 0:10:46.068 ******** 2026-03-28 05:24:55.290506 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290517 | orchestrator | 2026-03-28 05:24:55.290528 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:24:55.290539 | orchestrator | Saturday 28 March 2026 05:24:40 +0000 (0:00:01.226) 0:10:47.294 ******** 2026-03-28 05:24:55.290572 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:24:55.290583 | orchestrator | 2026-03-28 05:24:55.290594 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:24:55.290615 | orchestrator | Saturday 28 March 2026 05:24:42 +0000 (0:00:01.222) 0:10:48.516 ******** 2026-03-28 05:24:55.290626 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290637 | orchestrator | 2026-03-28 05:24:55.290648 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:24:55.290659 | orchestrator | Saturday 28 March 2026 05:24:43 +0000 (0:00:01.175) 0:10:49.691 ******** 2026-03-28 05:24:55.290670 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:24:55.290682 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 
05:24:55.290693 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:24:55.290703 | orchestrator | 2026-03-28 05:24:55.290714 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:24:55.290725 | orchestrator | Saturday 28 March 2026 05:24:45 +0000 (0:00:02.026) 0:10:51.718 ******** 2026-03-28 05:24:55.290736 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:24:55.290747 | orchestrator | 2026-03-28 05:24:55.290757 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 05:24:55.290768 | orchestrator | Saturday 28 March 2026 05:24:46 +0000 (0:00:01.309) 0:10:53.028 ******** 2026-03-28 05:24:55.290779 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:24:55.290790 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:24:55.290801 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:24:55.290812 | orchestrator | 2026-03-28 05:24:55.290823 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:24:55.290833 | orchestrator | Saturday 28 March 2026 05:24:49 +0000 (0:00:03.283) 0:10:56.311 ******** 2026-03-28 05:24:55.290844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 05:24:55.290856 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 05:24:55.290867 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 05:24:55.290877 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:24:55.290888 | orchestrator | 2026-03-28 05:24:55.290899 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:24:55.290910 | orchestrator | Saturday 28 March 2026 05:24:51 +0000 (0:00:01.919) 
0:10:58.231 ******** 2026-03-28 05:24:55.290923 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.290938 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.290949 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.290960 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:24:55.290971 | orchestrator | 2026-03-28 05:24:55.290982 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:24:55.290999 | orchestrator | Saturday 28 March 2026 05:24:54 +0000 (0:00:02.252) 0:11:00.483 ******** 2026-03-28 05:24:55.291012 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.291033 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.291045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:24:55.291056 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:24:55.291067 | orchestrator | 2026-03-28 05:24:55.291085 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:25:15.416770 | orchestrator | Saturday 28 March 2026 05:24:55 +0000 (0:00:01.223) 0:11:01.707 ******** 2026-03-28 05:25:15.416890 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:24:47.121110', 'end': '2026-03-28 05:24:47.177867', 'delta': '0:00:00.056757', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:25:15.416912 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '63c01d28d51e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:24:48.062223', 'end': '2026-03-28 
05:24:48.103018', 'delta': '0:00:00.040795', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63c01d28d51e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:25:15.416925 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '99ef085e2de2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:24:48.678790', 'end': '2026-03-28 05:24:48.723554', 'delta': '0:00:00.044764', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['99ef085e2de2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 05:25:15.416936 | orchestrator | 2026-03-28 05:25:15.416949 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 05:25:15.416960 | orchestrator | Saturday 28 March 2026 05:24:56 +0000 (0:00:01.218) 0:11:02.925 ******** 2026-03-28 05:25:15.416993 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:15.417007 | orchestrator | 2026-03-28 05:25:15.417018 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 05:25:15.417045 | orchestrator | Saturday 28 March 2026 05:24:57 +0000 (0:00:01.291) 0:11:04.217 ******** 2026-03-28 05:25:15.417078 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 05:25:15.417103 | orchestrator | 2026-03-28 05:25:15.417114 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 05:25:15.417125 | orchestrator | Saturday 28 March 2026 05:24:59 +0000 (0:00:01.268) 0:11:05.486 ******** 2026-03-28 05:25:15.417136 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:15.417147 | orchestrator | 2026-03-28 05:25:15.417158 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 05:25:15.417169 | orchestrator | Saturday 28 March 2026 05:25:00 +0000 (0:00:01.165) 0:11:06.652 ******** 2026-03-28 05:25:15.417180 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:25:15.417191 | orchestrator | 2026-03-28 05:25:15.417202 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:25:15.417213 | orchestrator | Saturday 28 March 2026 05:25:02 +0000 (0:00:02.050) 0:11:08.702 ******** 2026-03-28 05:25:15.417224 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:15.417234 | orchestrator | 2026-03-28 05:25:15.417245 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 05:25:15.417256 | orchestrator | Saturday 28 March 2026 05:25:03 +0000 (0:00:01.191) 0:11:09.894 ******** 2026-03-28 05:25:15.417267 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417278 | orchestrator | 2026-03-28 05:25:15.417289 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:25:15.417300 | orchestrator | Saturday 28 March 2026 05:25:04 +0000 (0:00:01.143) 0:11:11.037 ******** 2026-03-28 05:25:15.417310 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417323 | orchestrator | 2026-03-28 05:25:15.417336 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 
05:25:15.417349 | orchestrator | Saturday 28 March 2026 05:25:05 +0000 (0:00:01.249) 0:11:12.287 ******** 2026-03-28 05:25:15.417368 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417388 | orchestrator | 2026-03-28 05:25:15.417436 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:25:15.417468 | orchestrator | Saturday 28 March 2026 05:25:06 +0000 (0:00:01.120) 0:11:13.408 ******** 2026-03-28 05:25:15.417481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417493 | orchestrator | 2026-03-28 05:25:15.417505 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:25:15.417518 | orchestrator | Saturday 28 March 2026 05:25:08 +0000 (0:00:01.148) 0:11:14.556 ******** 2026-03-28 05:25:15.417531 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417543 | orchestrator | 2026-03-28 05:25:15.417556 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:25:15.417568 | orchestrator | Saturday 28 March 2026 05:25:09 +0000 (0:00:01.157) 0:11:15.714 ******** 2026-03-28 05:25:15.417581 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417593 | orchestrator | 2026-03-28 05:25:15.417606 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:25:15.417618 | orchestrator | Saturday 28 March 2026 05:25:10 +0000 (0:00:01.207) 0:11:16.922 ******** 2026-03-28 05:25:15.417630 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417642 | orchestrator | 2026-03-28 05:25:15.417655 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:25:15.417668 | orchestrator | Saturday 28 March 2026 05:25:11 +0000 (0:00:01.203) 0:11:18.125 ******** 2026-03-28 05:25:15.417681 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417692 | 
orchestrator | 2026-03-28 05:25:15.417703 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:25:15.417714 | orchestrator | Saturday 28 March 2026 05:25:12 +0000 (0:00:01.197) 0:11:19.323 ******** 2026-03-28 05:25:15.417725 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:15.417736 | orchestrator | 2026-03-28 05:25:15.417747 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 05:25:15.417767 | orchestrator | Saturday 28 March 2026 05:25:14 +0000 (0:00:01.211) 0:11:20.534 ******** 2026-03-28 05:25:15.417779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:15.417794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:15.417805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-28 05:25:15.417818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:25:15.417831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:15.417842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:15.417861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 
05:25:16.729081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b8082e3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:25:16.729217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:16.729237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:25:16.729246 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:16.729255 | orchestrator | 2026-03-28 05:25:16.729264 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:25:16.729272 | orchestrator | Saturday 28 March 2026 05:25:15 +0000 (0:00:01.296) 0:11:21.832 ******** 2026-03-28 05:25:16.729282 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729316 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729339 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729350 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729358 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:16.729374 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b8082e3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:48.619115 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:48.619267 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:25:48.619285 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
05:25:48.619301 | orchestrator | 2026-03-28 05:25:48.619315 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:25:48.619329 | orchestrator | Saturday 28 March 2026 05:25:16 +0000 (0:00:01.315) 0:11:23.147 ******** 2026-03-28 05:25:48.619385 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:48.619399 | orchestrator | 2026-03-28 05:25:48.619410 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:25:48.619421 | orchestrator | Saturday 28 March 2026 05:25:18 +0000 (0:00:01.590) 0:11:24.738 ******** 2026-03-28 05:25:48.619432 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:48.619443 | orchestrator | 2026-03-28 05:25:48.619455 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:25:48.619466 | orchestrator | Saturday 28 March 2026 05:25:19 +0000 (0:00:01.185) 0:11:25.923 ******** 2026-03-28 05:25:48.619477 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:25:48.619488 | orchestrator | 2026-03-28 05:25:48.619499 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:25:48.619510 | orchestrator | Saturday 28 March 2026 05:25:21 +0000 (0:00:01.588) 0:11:27.512 ******** 2026-03-28 05:25:48.619549 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.619561 | orchestrator | 2026-03-28 05:25:48.619572 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:25:48.619585 | orchestrator | Saturday 28 March 2026 05:25:22 +0000 (0:00:01.146) 0:11:28.659 ******** 2026-03-28 05:25:48.619597 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.619610 | orchestrator | 2026-03-28 05:25:48.619623 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:25:48.619636 | orchestrator | Saturday 28 March 2026 
05:25:23 +0000 (0:00:01.236) 0:11:29.896 ******** 2026-03-28 05:25:48.619648 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.619661 | orchestrator | 2026-03-28 05:25:48.619674 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:25:48.619687 | orchestrator | Saturday 28 March 2026 05:25:24 +0000 (0:00:01.153) 0:11:31.049 ******** 2026-03-28 05:25:48.619700 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-28 05:25:48.619713 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:25:48.619725 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-28 05:25:48.619738 | orchestrator | 2026-03-28 05:25:48.619750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:25:48.619763 | orchestrator | Saturday 28 March 2026 05:25:26 +0000 (0:00:02.189) 0:11:33.239 ******** 2026-03-28 05:25:48.619776 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 05:25:48.619790 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 05:25:48.619802 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 05:25:48.619814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.619825 | orchestrator | 2026-03-28 05:25:48.619836 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:25:48.619848 | orchestrator | Saturday 28 March 2026 05:25:28 +0000 (0:00:01.324) 0:11:34.563 ******** 2026-03-28 05:25:48.619859 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.619870 | orchestrator | 2026-03-28 05:25:48.619881 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:25:48.619892 | orchestrator | Saturday 28 March 2026 05:25:29 +0000 (0:00:01.191) 0:11:35.755 ******** 2026-03-28 05:25:48.619903 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:25:48.619915 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:25:48.619926 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:25:48.619937 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:25:48.619948 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:25:48.619959 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:25:48.619990 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:25:48.620002 | orchestrator | 2026-03-28 05:25:48.620013 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:25:48.620023 | orchestrator | Saturday 28 March 2026 05:25:31 +0000 (0:00:01.990) 0:11:37.745 ******** 2026-03-28 05:25:48.620034 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:25:48.620045 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:25:48.620055 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:25:48.620066 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:25:48.620084 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:25:48.620095 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:25:48.620115 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:25:48.620126 | orchestrator | 2026-03-28 05:25:48.620136 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-28 05:25:48.620147 | orchestrator | Saturday 28 March 2026 05:25:33 +0000 (0:00:02.293) 0:11:40.038 ******** 2026-03-28 05:25:48.620158 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620168 | orchestrator | 2026-03-28 05:25:48.620179 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-28 05:25:48.620190 | orchestrator | Saturday 28 March 2026 05:25:34 +0000 (0:00:00.922) 0:11:40.961 ******** 2026-03-28 05:25:48.620200 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620211 | orchestrator | 2026-03-28 05:25:48.620222 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-28 05:25:48.620233 | orchestrator | Saturday 28 March 2026 05:25:35 +0000 (0:00:00.924) 0:11:41.886 ******** 2026-03-28 05:25:48.620244 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620255 | orchestrator | 2026-03-28 05:25:48.620266 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-28 05:25:48.620276 | orchestrator | Saturday 28 March 2026 05:25:36 +0000 (0:00:00.802) 0:11:42.688 ******** 2026-03-28 05:25:48.620287 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620298 | orchestrator | 2026-03-28 05:25:48.620308 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-28 05:25:48.620319 | orchestrator | Saturday 28 March 2026 05:25:37 +0000 (0:00:00.933) 0:11:43.622 ******** 2026-03-28 05:25:48.620348 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620360 | orchestrator | 2026-03-28 05:25:48.620371 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-28 05:25:48.620382 | orchestrator | Saturday 28 March 2026 05:25:37 +0000 (0:00:00.801) 0:11:44.424 ******** 
2026-03-28 05:25:48.620392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 05:25:48.620403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 05:25:48.620414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 05:25:48.620425 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620436 | orchestrator | 2026-03-28 05:25:48.620447 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-28 05:25:48.620458 | orchestrator | Saturday 28 March 2026 05:25:39 +0000 (0:00:01.159) 0:11:45.584 ******** 2026-03-28 05:25:48.620468 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-28 05:25:48.620479 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-28 05:25:48.620490 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-28 05:25:48.620501 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-28 05:25:48.620511 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-28 05:25:48.620522 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-28 05:25:48.620533 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:25:48.620544 | orchestrator | 2026-03-28 05:25:48.620555 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-28 05:25:48.620566 | orchestrator | Saturday 28 March 2026 05:25:40 +0000 (0:00:01.782) 0:11:47.366 ******** 2026-03-28 05:25:48.620576 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:25:48.620587 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:25:48.620598 | orchestrator | 2026-03-28 05:25:48.620609 | 
orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-28 05:25:48.620620 | orchestrator | Saturday 28 March 2026 05:25:44 +0000 (0:00:03.160) 0:11:50.527 ******** 2026-03-28 05:25:48.620631 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:25:48.620649 | orchestrator | 2026-03-28 05:25:48.620660 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:25:48.620671 | orchestrator | Saturday 28 March 2026 05:25:46 +0000 (0:00:02.198) 0:11:52.725 ******** 2026-03-28 05:25:48.620682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-28 05:25:48.620693 | orchestrator | 2026-03-28 05:25:48.620704 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:25:48.620715 | orchestrator | Saturday 28 March 2026 05:25:47 +0000 (0:00:01.159) 0:11:53.885 ******** 2026-03-28 05:25:48.620726 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-28 05:25:48.620736 | orchestrator | 2026-03-28 05:25:48.620747 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:25:48.620765 | orchestrator | Saturday 28 March 2026 05:25:48 +0000 (0:00:01.147) 0:11:55.033 ******** 2026-03-28 05:26:32.344930 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345074 | orchestrator | 2026-03-28 05:26:32.345094 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:26:32.345107 | orchestrator | Saturday 28 March 2026 05:25:50 +0000 (0:00:01.553) 0:11:56.586 ******** 2026-03-28 05:26:32.345118 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345129 | orchestrator | 2026-03-28 05:26:32.345140 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-28 05:26:32.345151 | orchestrator | Saturday 28 March 2026 05:25:51 +0000 (0:00:01.189) 0:11:57.776 ******** 2026-03-28 05:26:32.345161 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345170 | orchestrator | 2026-03-28 05:26:32.345180 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:26:32.345208 | orchestrator | Saturday 28 March 2026 05:25:52 +0000 (0:00:01.123) 0:11:58.899 ******** 2026-03-28 05:26:32.345218 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345228 | orchestrator | 2026-03-28 05:26:32.345237 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 05:26:32.345312 | orchestrator | Saturday 28 March 2026 05:25:53 +0000 (0:00:01.178) 0:12:00.078 ******** 2026-03-28 05:26:32.345323 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345333 | orchestrator | 2026-03-28 05:26:32.345343 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 05:26:32.345353 | orchestrator | Saturday 28 March 2026 05:25:55 +0000 (0:00:01.608) 0:12:01.686 ******** 2026-03-28 05:26:32.345363 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345372 | orchestrator | 2026-03-28 05:26:32.345382 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 05:26:32.345392 | orchestrator | Saturday 28 March 2026 05:25:56 +0000 (0:00:01.108) 0:12:02.795 ******** 2026-03-28 05:26:32.345402 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345411 | orchestrator | 2026-03-28 05:26:32.345421 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 05:26:32.345431 | orchestrator | Saturday 28 March 2026 05:25:57 +0000 (0:00:01.184) 0:12:03.980 ******** 2026-03-28 05:26:32.345441 | orchestrator | ok: [testbed-node-1] 
2026-03-28 05:26:32.345452 | orchestrator | 2026-03-28 05:26:32.345463 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 05:26:32.345474 | orchestrator | Saturday 28 March 2026 05:25:59 +0000 (0:00:01.555) 0:12:05.535 ******** 2026-03-28 05:26:32.345485 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345496 | orchestrator | 2026-03-28 05:26:32.345507 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 05:26:32.345518 | orchestrator | Saturday 28 March 2026 05:26:00 +0000 (0:00:01.675) 0:12:07.211 ******** 2026-03-28 05:26:32.345531 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345543 | orchestrator | 2026-03-28 05:26:32.345554 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:26:32.345565 | orchestrator | Saturday 28 March 2026 05:26:01 +0000 (0:00:00.799) 0:12:08.011 ******** 2026-03-28 05:26:32.345600 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345611 | orchestrator | 2026-03-28 05:26:32.345628 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 05:26:32.345645 | orchestrator | Saturday 28 March 2026 05:26:02 +0000 (0:00:00.865) 0:12:08.876 ******** 2026-03-28 05:26:32.345661 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345677 | orchestrator | 2026-03-28 05:26:32.345692 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:26:32.345709 | orchestrator | Saturday 28 March 2026 05:26:03 +0000 (0:00:00.855) 0:12:09.731 ******** 2026-03-28 05:26:32.345725 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345743 | orchestrator | 2026-03-28 05:26:32.345762 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:26:32.345779 | orchestrator | Saturday 28 
March 2026 05:26:04 +0000 (0:00:00.829) 0:12:10.561 ******** 2026-03-28 05:26:32.345793 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345805 | orchestrator | 2026-03-28 05:26:32.345815 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:26:32.345825 | orchestrator | Saturday 28 March 2026 05:26:04 +0000 (0:00:00.798) 0:12:11.359 ******** 2026-03-28 05:26:32.345834 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345844 | orchestrator | 2026-03-28 05:26:32.345854 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:26:32.345863 | orchestrator | Saturday 28 March 2026 05:26:05 +0000 (0:00:00.808) 0:12:12.168 ******** 2026-03-28 05:26:32.345873 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.345882 | orchestrator | 2026-03-28 05:26:32.345892 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:26:32.345902 | orchestrator | Saturday 28 March 2026 05:26:06 +0000 (0:00:00.791) 0:12:12.959 ******** 2026-03-28 05:26:32.345911 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345921 | orchestrator | 2026-03-28 05:26:32.345931 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:26:32.345940 | orchestrator | Saturday 28 March 2026 05:26:07 +0000 (0:00:00.850) 0:12:13.809 ******** 2026-03-28 05:26:32.345950 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345960 | orchestrator | 2026-03-28 05:26:32.345969 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:26:32.345979 | orchestrator | Saturday 28 March 2026 05:26:08 +0000 (0:00:00.895) 0:12:14.705 ******** 2026-03-28 05:26:32.345989 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.345999 | orchestrator | 2026-03-28 05:26:32.346008 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-03-28 05:26:32.346074 | orchestrator | Saturday 28 March 2026 05:26:09 +0000 (0:00:00.812) 0:12:15.518 ******** 2026-03-28 05:26:32.346084 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346094 | orchestrator | 2026-03-28 05:26:32.346104 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 05:26:32.346114 | orchestrator | Saturday 28 March 2026 05:26:09 +0000 (0:00:00.781) 0:12:16.299 ******** 2026-03-28 05:26:32.346123 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346133 | orchestrator | 2026-03-28 05:26:32.346143 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:26:32.346173 | orchestrator | Saturday 28 March 2026 05:26:10 +0000 (0:00:00.793) 0:12:17.093 ******** 2026-03-28 05:26:32.346184 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346193 | orchestrator | 2026-03-28 05:26:32.346203 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:26:32.346213 | orchestrator | Saturday 28 March 2026 05:26:11 +0000 (0:00:00.898) 0:12:17.992 ******** 2026-03-28 05:26:32.346222 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346232 | orchestrator | 2026-03-28 05:26:32.346267 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:26:32.346278 | orchestrator | Saturday 28 March 2026 05:26:12 +0000 (0:00:00.793) 0:12:18.785 ******** 2026-03-28 05:26:32.346299 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346309 | orchestrator | 2026-03-28 05:26:32.346326 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:26:32.346336 | orchestrator | Saturday 28 March 2026 05:26:13 +0000 (0:00:00.800) 0:12:19.586 ******** 2026-03-28 
05:26:32.346346 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346355 | orchestrator | 2026-03-28 05:26:32.346365 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:26:32.346375 | orchestrator | Saturday 28 March 2026 05:26:13 +0000 (0:00:00.796) 0:12:20.383 ******** 2026-03-28 05:26:32.346387 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346403 | orchestrator | 2026-03-28 05:26:32.346420 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 05:26:32.346436 | orchestrator | Saturday 28 March 2026 05:26:14 +0000 (0:00:00.846) 0:12:21.230 ******** 2026-03-28 05:26:32.346452 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346482 | orchestrator | 2026-03-28 05:26:32.346502 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:26:32.346518 | orchestrator | Saturday 28 March 2026 05:26:15 +0000 (0:00:00.770) 0:12:22.001 ******** 2026-03-28 05:26:32.346531 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346541 | orchestrator | 2026-03-28 05:26:32.346550 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:26:32.346560 | orchestrator | Saturday 28 March 2026 05:26:16 +0000 (0:00:00.760) 0:12:22.761 ******** 2026-03-28 05:26:32.346570 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346579 | orchestrator | 2026-03-28 05:26:32.346597 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:26:32.346613 | orchestrator | Saturday 28 March 2026 05:26:17 +0000 (0:00:00.828) 0:12:23.589 ******** 2026-03-28 05:26:32.346629 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346644 | orchestrator | 2026-03-28 05:26:32.346660 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-28 05:26:32.346677 | orchestrator | Saturday 28 March 2026 05:26:17 +0000 (0:00:00.801) 0:12:24.391 ******** 2026-03-28 05:26:32.346693 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346708 | orchestrator | 2026-03-28 05:26:32.346724 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:26:32.346739 | orchestrator | Saturday 28 March 2026 05:26:18 +0000 (0:00:00.779) 0:12:25.170 ******** 2026-03-28 05:26:32.346754 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.346771 | orchestrator | 2026-03-28 05:26:32.346787 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:26:32.346804 | orchestrator | Saturday 28 March 2026 05:26:20 +0000 (0:00:01.619) 0:12:26.790 ******** 2026-03-28 05:26:32.346820 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.346836 | orchestrator | 2026-03-28 05:26:32.346851 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:26:32.346867 | orchestrator | Saturday 28 March 2026 05:26:22 +0000 (0:00:02.107) 0:12:28.898 ******** 2026-03-28 05:26:32.346884 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-28 05:26:32.346902 | orchestrator | 2026-03-28 05:26:32.346917 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 05:26:32.346933 | orchestrator | Saturday 28 March 2026 05:26:23 +0000 (0:00:01.362) 0:12:30.261 ******** 2026-03-28 05:26:32.346950 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.346967 | orchestrator | 2026-03-28 05:26:32.346982 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 05:26:32.347000 | orchestrator | Saturday 28 March 2026 05:26:25 +0000 (0:00:01.214) 0:12:31.476 ******** 
2026-03-28 05:26:32.347017 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.347034 | orchestrator | 2026-03-28 05:26:32.347048 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 05:26:32.347079 | orchestrator | Saturday 28 March 2026 05:26:26 +0000 (0:00:01.186) 0:12:32.663 ******** 2026-03-28 05:26:32.347097 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 05:26:32.347113 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 05:26:32.347130 | orchestrator | 2026-03-28 05:26:32.347148 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 05:26:32.347164 | orchestrator | Saturday 28 March 2026 05:26:28 +0000 (0:00:01.860) 0:12:34.524 ******** 2026-03-28 05:26:32.347180 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:26:32.347190 | orchestrator | 2026-03-28 05:26:32.347200 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 05:26:32.347210 | orchestrator | Saturday 28 March 2026 05:26:29 +0000 (0:00:01.492) 0:12:36.017 ******** 2026-03-28 05:26:32.347220 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.347230 | orchestrator | 2026-03-28 05:26:32.347265 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 05:26:32.347277 | orchestrator | Saturday 28 March 2026 05:26:30 +0000 (0:00:01.158) 0:12:37.176 ******** 2026-03-28 05:26:32.347287 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:26:32.347297 | orchestrator | 2026-03-28 05:26:32.347306 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:26:32.347316 | orchestrator | Saturday 28 March 2026 05:26:31 +0000 (0:00:00.778) 0:12:37.954 ******** 2026-03-28 05:26:32.347339 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:27:13.376310 | orchestrator | 2026-03-28 05:27:13.376434 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:27:13.376452 | orchestrator | Saturday 28 March 2026 05:26:32 +0000 (0:00:00.811) 0:12:38.766 ******** 2026-03-28 05:27:13.376465 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-28 05:27:13.376478 | orchestrator | 2026-03-28 05:27:13.376489 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 05:27:13.376501 | orchestrator | Saturday 28 March 2026 05:26:33 +0000 (0:00:01.162) 0:12:39.929 ******** 2026-03-28 05:27:13.376512 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:27:13.376524 | orchestrator | 2026-03-28 05:27:13.376551 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 05:27:13.376564 | orchestrator | Saturday 28 March 2026 05:26:35 +0000 (0:00:01.804) 0:12:41.733 ******** 2026-03-28 05:27:13.376575 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 05:27:13.376587 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 05:27:13.376598 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 05:27:13.376609 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376620 | orchestrator | 2026-03-28 05:27:13.376632 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 05:27:13.376642 | orchestrator | Saturday 28 March 2026 05:26:36 +0000 (0:00:01.168) 0:12:42.902 ******** 2026-03-28 05:27:13.376653 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376664 | orchestrator | 2026-03-28 05:27:13.376675 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-28 05:27:13.376686 | orchestrator | Saturday 28 March 2026 05:26:37 +0000 (0:00:01.199) 0:12:44.101 ******** 2026-03-28 05:27:13.376697 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376708 | orchestrator | 2026-03-28 05:27:13.376719 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 05:27:13.376730 | orchestrator | Saturday 28 March 2026 05:26:38 +0000 (0:00:01.174) 0:12:45.275 ******** 2026-03-28 05:27:13.376741 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376752 | orchestrator | 2026-03-28 05:27:13.376762 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 05:27:13.376843 | orchestrator | Saturday 28 March 2026 05:26:40 +0000 (0:00:01.234) 0:12:46.510 ******** 2026-03-28 05:27:13.376859 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376871 | orchestrator | 2026-03-28 05:27:13.376884 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 05:27:13.376897 | orchestrator | Saturday 28 March 2026 05:26:41 +0000 (0:00:01.192) 0:12:47.702 ******** 2026-03-28 05:27:13.376909 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.376921 | orchestrator | 2026-03-28 05:27:13.376933 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:27:13.376946 | orchestrator | Saturday 28 March 2026 05:26:42 +0000 (0:00:00.792) 0:12:48.495 ******** 2026-03-28 05:27:13.376958 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:27:13.376970 | orchestrator | 2026-03-28 05:27:13.376983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:27:13.376995 | orchestrator | Saturday 28 March 2026 05:26:44 +0000 (0:00:02.252) 0:12:50.748 ******** 2026-03-28 05:27:13.377008 | orchestrator | ok: 
[testbed-node-1] 2026-03-28 05:27:13.377020 | orchestrator | 2026-03-28 05:27:13.377033 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:27:13.377045 | orchestrator | Saturday 28 March 2026 05:26:45 +0000 (0:00:00.849) 0:12:51.597 ******** 2026-03-28 05:27:13.377057 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-28 05:27:13.377069 | orchestrator | 2026-03-28 05:27:13.377082 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 05:27:13.377094 | orchestrator | Saturday 28 March 2026 05:26:46 +0000 (0:00:01.186) 0:12:52.784 ******** 2026-03-28 05:27:13.377106 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377119 | orchestrator | 2026-03-28 05:27:13.377131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 05:27:13.377141 | orchestrator | Saturday 28 March 2026 05:26:47 +0000 (0:00:01.202) 0:12:53.987 ******** 2026-03-28 05:27:13.377152 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377204 | orchestrator | 2026-03-28 05:27:13.377217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 05:27:13.377228 | orchestrator | Saturday 28 March 2026 05:26:48 +0000 (0:00:01.136) 0:12:55.123 ******** 2026-03-28 05:27:13.377239 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377250 | orchestrator | 2026-03-28 05:27:13.377260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 05:27:13.377271 | orchestrator | Saturday 28 March 2026 05:26:49 +0000 (0:00:01.192) 0:12:56.316 ******** 2026-03-28 05:27:13.377282 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377293 | orchestrator | 2026-03-28 05:27:13.377303 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-28 05:27:13.377314 | orchestrator | Saturday 28 March 2026 05:26:51 +0000 (0:00:01.176) 0:12:57.493 ******** 2026-03-28 05:27:13.377325 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377336 | orchestrator | 2026-03-28 05:27:13.377346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 05:27:13.377357 | orchestrator | Saturday 28 March 2026 05:26:52 +0000 (0:00:01.317) 0:12:58.811 ******** 2026-03-28 05:27:13.377368 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377379 | orchestrator | 2026-03-28 05:27:13.377389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 05:27:13.377400 | orchestrator | Saturday 28 March 2026 05:26:53 +0000 (0:00:01.146) 0:12:59.958 ******** 2026-03-28 05:27:13.377411 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377422 | orchestrator | 2026-03-28 05:27:13.377433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 05:27:13.377463 | orchestrator | Saturday 28 March 2026 05:26:54 +0000 (0:00:01.157) 0:13:01.116 ******** 2026-03-28 05:27:13.377474 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377485 | orchestrator | 2026-03-28 05:27:13.377496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 05:27:13.377516 | orchestrator | Saturday 28 March 2026 05:26:55 +0000 (0:00:01.242) 0:13:02.358 ******** 2026-03-28 05:27:13.377527 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:27:13.377538 | orchestrator | 2026-03-28 05:27:13.377548 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:27:13.377559 | orchestrator | Saturday 28 March 2026 05:26:56 +0000 (0:00:00.861) 0:13:03.220 ******** 2026-03-28 05:27:13.377576 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-28 05:27:13.377588 | orchestrator | 2026-03-28 05:27:13.377598 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 05:27:13.377609 | orchestrator | Saturday 28 March 2026 05:26:57 +0000 (0:00:01.174) 0:13:04.394 ******** 2026-03-28 05:27:13.377620 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-28 05:27:13.377631 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-28 05:27:13.377642 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-28 05:27:13.377653 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-28 05:27:13.377663 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-28 05:27:13.377674 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-28 05:27:13.377685 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-28 05:27:13.377696 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-28 05:27:13.377707 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 05:27:13.377718 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 05:27:13.377729 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 05:27:13.377739 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 05:27:13.377750 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 05:27:13.377760 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 05:27:13.377771 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-28 05:27:13.377782 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-28 05:27:13.377793 | orchestrator | 2026-03-28 05:27:13.377803 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:27:13.377814 | orchestrator | Saturday 28 March 2026 05:27:04 +0000 (0:00:06.488) 0:13:10.882 ******** 2026-03-28 05:27:13.377825 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377835 | orchestrator | 2026-03-28 05:27:13.377846 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:27:13.377857 | orchestrator | Saturday 28 March 2026 05:27:05 +0000 (0:00:00.809) 0:13:11.692 ******** 2026-03-28 05:27:13.377867 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377878 | orchestrator | 2026-03-28 05:27:13.377888 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:27:13.377899 | orchestrator | Saturday 28 March 2026 05:27:06 +0000 (0:00:00.806) 0:13:12.498 ******** 2026-03-28 05:27:13.377910 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377920 | orchestrator | 2026-03-28 05:27:13.377931 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:27:13.377942 | orchestrator | Saturday 28 March 2026 05:27:06 +0000 (0:00:00.779) 0:13:13.278 ******** 2026-03-28 05:27:13.377953 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.377963 | orchestrator | 2026-03-28 05:27:13.377974 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 05:27:13.377985 | orchestrator | Saturday 28 March 2026 05:27:07 +0000 (0:00:00.814) 0:13:14.092 ******** 2026-03-28 05:27:13.377995 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378006 | orchestrator | 2026-03-28 05:27:13.378056 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:27:13.378070 | orchestrator | Saturday 28 March 2026 05:27:08 +0000 (0:00:00.919) 0:13:15.012 ******** 2026-03-28 
05:27:13.378088 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378099 | orchestrator | 2026-03-28 05:27:13.378110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:27:13.378121 | orchestrator | Saturday 28 March 2026 05:27:09 +0000 (0:00:00.802) 0:13:15.814 ******** 2026-03-28 05:27:13.378131 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378142 | orchestrator | 2026-03-28 05:27:13.378153 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:27:13.378188 | orchestrator | Saturday 28 March 2026 05:27:10 +0000 (0:00:00.825) 0:13:16.640 ******** 2026-03-28 05:27:13.378199 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378210 | orchestrator | 2026-03-28 05:27:13.378221 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:27:13.378232 | orchestrator | Saturday 28 March 2026 05:27:11 +0000 (0:00:00.824) 0:13:17.464 ******** 2026-03-28 05:27:13.378242 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378253 | orchestrator | 2026-03-28 05:27:13.378264 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:27:13.378275 | orchestrator | Saturday 28 March 2026 05:27:11 +0000 (0:00:00.779) 0:13:18.244 ******** 2026-03-28 05:27:13.378285 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378296 | orchestrator | 2026-03-28 05:27:13.378307 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:27:13.378318 | orchestrator | Saturday 28 March 2026 05:27:12 +0000 (0:00:00.778) 0:13:19.022 ******** 2026-03-28 05:27:13.378328 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:27:13.378339 | orchestrator | 2026-03-28 
05:27:13.378358 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:28:02.040061 | orchestrator | Saturday 28 March 2026 05:27:13 +0000 (0:00:00.771) 0:13:19.794 ******** 2026-03-28 05:28:02.040205 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040223 | orchestrator | 2026-03-28 05:28:02.040236 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:28:02.040248 | orchestrator | Saturday 28 March 2026 05:27:14 +0000 (0:00:00.840) 0:13:20.635 ******** 2026-03-28 05:28:02.040260 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040271 | orchestrator | 2026-03-28 05:28:02.040282 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:28:02.040294 | orchestrator | Saturday 28 March 2026 05:27:15 +0000 (0:00:00.879) 0:13:21.514 ******** 2026-03-28 05:28:02.040322 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040333 | orchestrator | 2026-03-28 05:28:02.040345 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:28:02.040356 | orchestrator | Saturday 28 March 2026 05:27:16 +0000 (0:00:00.947) 0:13:22.461 ******** 2026-03-28 05:28:02.040367 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040378 | orchestrator | 2026-03-28 05:28:02.040389 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:28:02.040400 | orchestrator | Saturday 28 March 2026 05:27:16 +0000 (0:00:00.919) 0:13:23.382 ******** 2026-03-28 05:28:02.040426 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040448 | orchestrator | 2026-03-28 05:28:02.040459 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:28:02.040470 | orchestrator | Saturday 28 March 2026 05:27:17 +0000 (0:00:00.836) 
0:13:24.218 ******** 2026-03-28 05:28:02.040481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040492 | orchestrator | 2026-03-28 05:28:02.040504 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:28:02.040516 | orchestrator | Saturday 28 March 2026 05:27:18 +0000 (0:00:00.784) 0:13:25.002 ******** 2026-03-28 05:28:02.040527 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040559 | orchestrator | 2026-03-28 05:28:02.040574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:28:02.040588 | orchestrator | Saturday 28 March 2026 05:27:19 +0000 (0:00:00.776) 0:13:25.779 ******** 2026-03-28 05:28:02.040601 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040614 | orchestrator | 2026-03-28 05:28:02.040627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:28:02.040640 | orchestrator | Saturday 28 March 2026 05:27:20 +0000 (0:00:00.874) 0:13:26.654 ******** 2026-03-28 05:28:02.040652 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040665 | orchestrator | 2026-03-28 05:28:02.040678 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:28:02.040690 | orchestrator | Saturday 28 March 2026 05:27:21 +0000 (0:00:00.846) 0:13:27.501 ******** 2026-03-28 05:28:02.040703 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040716 | orchestrator | 2026-03-28 05:28:02.040729 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:28:02.040742 | orchestrator | Saturday 28 March 2026 05:27:21 +0000 (0:00:00.790) 0:13:28.292 ******** 2026-03-28 05:28:02.040755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:28:02.040768 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:28:02.040781 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:28:02.040794 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040807 | orchestrator | 2026-03-28 05:28:02.040820 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:28:02.040833 | orchestrator | Saturday 28 March 2026 05:27:22 +0000 (0:00:01.065) 0:13:29.358 ******** 2026-03-28 05:28:02.040846 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:28:02.040859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:28:02.040872 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:28:02.040884 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040897 | orchestrator | 2026-03-28 05:28:02.040910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:28:02.040923 | orchestrator | Saturday 28 March 2026 05:27:24 +0000 (0:00:01.146) 0:13:30.504 ******** 2026-03-28 05:28:02.040933 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:28:02.040944 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:28:02.040955 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:28:02.040966 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.040977 | orchestrator | 2026-03-28 05:28:02.040988 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:28:02.040999 | orchestrator | Saturday 28 March 2026 05:27:25 +0000 (0:00:01.218) 0:13:31.723 ******** 2026-03-28 05:28:02.041010 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.041021 | orchestrator | 2026-03-28 05:28:02.041032 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-28 05:28:02.041043 | orchestrator | Saturday 28 March 2026 05:27:26 +0000 (0:00:00.799) 0:13:32.523 ******** 2026-03-28 05:28:02.041054 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-28 05:28:02.041065 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.041142 | orchestrator | 2026-03-28 05:28:02.041156 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 05:28:02.041167 | orchestrator | Saturday 28 March 2026 05:27:27 +0000 (0:00:00.949) 0:13:33.472 ******** 2026-03-28 05:28:02.041177 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:28:02.041188 | orchestrator | 2026-03-28 05:28:02.041199 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-28 05:28:02.041210 | orchestrator | Saturday 28 March 2026 05:27:28 +0000 (0:00:01.417) 0:13:34.890 ******** 2026-03-28 05:28:02.041221 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041240 | orchestrator | 2026-03-28 05:28:02.041251 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-28 05:28:02.041281 | orchestrator | Saturday 28 March 2026 05:27:29 +0000 (0:00:00.913) 0:13:35.804 ******** 2026-03-28 05:28:02.041292 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-03-28 05:28:02.041304 | orchestrator | 2026-03-28 05:28:02.041315 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-28 05:28:02.041326 | orchestrator | Saturday 28 March 2026 05:27:30 +0000 (0:00:01.457) 0:13:37.262 ******** 2026-03-28 05:28:02.041337 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-03-28 05:28:02.041348 | orchestrator | 2026-03-28 05:28:02.041359 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] 
***************************** 2026-03-28 05:28:02.041376 | orchestrator | Saturday 28 March 2026 05:27:34 +0000 (0:00:03.254) 0:13:40.516 ******** 2026-03-28 05:28:02.041387 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.041398 | orchestrator | 2026-03-28 05:28:02.041409 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-28 05:28:02.041420 | orchestrator | Saturday 28 March 2026 05:27:35 +0000 (0:00:01.224) 0:13:41.741 ******** 2026-03-28 05:28:02.041431 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041442 | orchestrator | 2026-03-28 05:28:02.041453 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-28 05:28:02.041463 | orchestrator | Saturday 28 March 2026 05:27:36 +0000 (0:00:01.215) 0:13:42.957 ******** 2026-03-28 05:28:02.041474 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041485 | orchestrator | 2026-03-28 05:28:02.041496 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-28 05:28:02.041506 | orchestrator | Saturday 28 March 2026 05:27:37 +0000 (0:00:01.178) 0:13:44.135 ******** 2026-03-28 05:28:02.041517 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:28:02.041528 | orchestrator | 2026-03-28 05:28:02.041539 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-28 05:28:02.041549 | orchestrator | Saturday 28 March 2026 05:27:39 +0000 (0:00:02.132) 0:13:46.268 ******** 2026-03-28 05:28:02.041560 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041571 | orchestrator | 2026-03-28 05:28:02.041582 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-28 05:28:02.041592 | orchestrator | Saturday 28 March 2026 05:27:41 +0000 (0:00:01.689) 0:13:47.958 ******** 2026-03-28 05:28:02.041603 | orchestrator | ok: [testbed-node-1] 2026-03-28 
05:28:02.041614 | orchestrator | 2026-03-28 05:28:02.041625 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-28 05:28:02.041635 | orchestrator | Saturday 28 March 2026 05:27:43 +0000 (0:00:01.547) 0:13:49.505 ******** 2026-03-28 05:28:02.041646 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041658 | orchestrator | 2026-03-28 05:28:02.041667 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-28 05:28:02.041677 | orchestrator | Saturday 28 March 2026 05:27:44 +0000 (0:00:01.589) 0:13:51.095 ******** 2026-03-28 05:28:02.041687 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:28:02.041696 | orchestrator | 2026-03-28 05:28:02.041706 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-28 05:28:02.041715 | orchestrator | Saturday 28 March 2026 05:27:46 +0000 (0:00:01.675) 0:13:52.771 ******** 2026-03-28 05:28:02.041725 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:28:02.041735 | orchestrator | 2026-03-28 05:28:02.041744 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-28 05:28:02.041754 | orchestrator | Saturday 28 March 2026 05:27:48 +0000 (0:00:01.668) 0:13:54.439 ******** 2026-03-28 05:28:02.041763 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:28:02.041773 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-28 05:28:02.041783 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 05:28:02.041798 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-28 05:28:02.041808 | orchestrator | 2026-03-28 05:28:02.041818 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-28 05:28:02.041827 | orchestrator | 
Saturday 28 March 2026 05:27:52 +0000 (0:00:04.321) 0:13:58.761 ******** 2026-03-28 05:28:02.041837 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:28:02.041847 | orchestrator | 2026-03-28 05:28:02.041856 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-28 05:28:02.041866 | orchestrator | Saturday 28 March 2026 05:27:54 +0000 (0:00:02.023) 0:14:00.785 ******** 2026-03-28 05:28:02.041876 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041885 | orchestrator | 2026-03-28 05:28:02.041895 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-28 05:28:02.041904 | orchestrator | Saturday 28 March 2026 05:27:55 +0000 (0:00:01.283) 0:14:02.069 ******** 2026-03-28 05:28:02.041914 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041923 | orchestrator | 2026-03-28 05:28:02.041933 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-28 05:28:02.041942 | orchestrator | Saturday 28 March 2026 05:27:56 +0000 (0:00:01.150) 0:14:03.219 ******** 2026-03-28 05:28:02.041952 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.041962 | orchestrator | 2026-03-28 05:28:02.041971 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-28 05:28:02.041981 | orchestrator | Saturday 28 March 2026 05:27:58 +0000 (0:00:01.829) 0:14:05.049 ******** 2026-03-28 05:28:02.041991 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:28:02.042000 | orchestrator | 2026-03-28 05:28:02.042010 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-28 05:28:02.042104 | orchestrator | Saturday 28 March 2026 05:28:00 +0000 (0:00:01.462) 0:14:06.511 ******** 2026-03-28 05:28:02.042114 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:28:02.042124 | orchestrator | 2026-03-28 05:28:02.042133 | orchestrator | TASK 
[ceph-mon : Include start_monitor.yml] ************************************ 2026-03-28 05:28:02.042143 | orchestrator | Saturday 28 March 2026 05:28:00 +0000 (0:00:00.786) 0:14:07.298 ******** 2026-03-28 05:28:02.042153 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-28 05:28:02.042162 | orchestrator | 2026-03-28 05:28:02.042180 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-28 05:29:09.080484 | orchestrator | Saturday 28 March 2026 05:28:02 +0000 (0:00:01.163) 0:14:08.462 ******** 2026-03-28 05:29:09.080607 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.080623 | orchestrator | 2026-03-28 05:29:09.080635 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-28 05:29:09.080645 | orchestrator | Saturday 28 March 2026 05:28:03 +0000 (0:00:01.130) 0:14:09.592 ******** 2026-03-28 05:29:09.080656 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.080666 | orchestrator | 2026-03-28 05:29:09.080676 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-28 05:29:09.080702 | orchestrator | Saturday 28 March 2026 05:28:04 +0000 (0:00:01.121) 0:14:10.714 ******** 2026-03-28 05:29:09.080713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-28 05:29:09.080723 | orchestrator | 2026-03-28 05:29:09.080733 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-28 05:29:09.080743 | orchestrator | Saturday 28 March 2026 05:28:05 +0000 (0:00:01.153) 0:14:11.868 ******** 2026-03-28 05:29:09.080758 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.080775 | orchestrator | 2026-03-28 05:29:09.080792 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-28 05:29:09.080808 | orchestrator | 
Saturday 28 March 2026 05:28:08 +0000 (0:00:02.780) 0:14:14.648 ******** 2026-03-28 05:29:09.080825 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.080842 | orchestrator | 2026-03-28 05:29:09.080860 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-28 05:29:09.080901 | orchestrator | Saturday 28 March 2026 05:28:10 +0000 (0:00:01.955) 0:14:16.604 ******** 2026-03-28 05:29:09.080912 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.080921 | orchestrator | 2026-03-28 05:29:09.080931 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-28 05:29:09.080941 | orchestrator | Saturday 28 March 2026 05:28:12 +0000 (0:00:02.493) 0:14:19.098 ******** 2026-03-28 05:29:09.080951 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:29:09.080961 | orchestrator | 2026-03-28 05:29:09.081122 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-28 05:29:09.081137 | orchestrator | Saturday 28 March 2026 05:28:15 +0000 (0:00:02.774) 0:14:21.873 ******** 2026-03-28 05:29:09.081149 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-28 05:29:09.081161 | orchestrator | 2026-03-28 05:29:09.081173 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-28 05:29:09.081185 | orchestrator | Saturday 28 March 2026 05:28:16 +0000 (0:00:01.115) 0:14:22.989 ******** 2026-03-28 05:29:09.081196 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-28 05:29:09.081208 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.081219 | orchestrator | 2026-03-28 05:29:09.081231 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-28 05:29:09.081242 | orchestrator | Saturday 28 March 2026 05:28:39 +0000 (0:00:22.987) 0:14:45.977 ******** 2026-03-28 05:29:09.081254 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.081265 | orchestrator | 2026-03-28 05:29:09.081278 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-28 05:29:09.081289 | orchestrator | Saturday 28 March 2026 05:28:42 +0000 (0:00:02.793) 0:14:48.771 ******** 2026-03-28 05:29:09.081300 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081311 | orchestrator | 2026-03-28 05:29:09.081322 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-28 05:29:09.081334 | orchestrator | Saturday 28 March 2026 05:28:43 +0000 (0:00:00.797) 0:14:49.568 ******** 2026-03-28 05:29:09.081348 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:29:09.081363 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:29:09.081376 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 05:29:09.081386 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 05:29:09.081417 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-28 05:29:09.081445 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}])  2026-03-28 05:29:09.081457 | orchestrator | 2026-03-28 05:29:09.081467 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-28 05:29:09.081478 | orchestrator | Saturday 28 March 2026 05:28:52 +0000 (0:00:09.365) 0:14:58.934 ******** 2026-03-28 05:29:09.081487 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:29:09.081497 | orchestrator | 
2026-03-28 05:29:09.081507 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:29:09.081517 | orchestrator | Saturday 28 March 2026 05:28:54 +0000 (0:00:02.279) 0:15:01.214 ******** 2026-03-28 05:29:09.081526 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:29:09.081536 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-28 05:29:09.081546 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-28 05:29:09.081556 | orchestrator | 2026-03-28 05:29:09.081565 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:29:09.081575 | orchestrator | Saturday 28 March 2026 05:28:56 +0000 (0:00:02.049) 0:15:03.264 ******** 2026-03-28 05:29:09.081585 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 05:29:09.081595 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 05:29:09.081605 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 05:29:09.081615 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081625 | orchestrator | 2026-03-28 05:29:09.081635 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-28 05:29:09.081645 | orchestrator | Saturday 28 March 2026 05:28:58 +0000 (0:00:01.617) 0:15:04.881 ******** 2026-03-28 05:29:09.081655 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081664 | orchestrator | 2026-03-28 05:29:09.081674 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-28 05:29:09.081684 | orchestrator | Saturday 28 March 2026 05:28:59 +0000 (0:00:00.897) 0:15:05.778 ******** 2026-03-28 05:29:09.081693 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:29:09.081703 | orchestrator | 2026-03-28 05:29:09.081713 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 05:29:09.081722 | orchestrator | Saturday 28 March 2026 05:29:01 +0000 (0:00:01.939) 0:15:07.718 ******** 2026-03-28 05:29:09.081732 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081742 | orchestrator | 2026-03-28 05:29:09.081751 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 05:29:09.081764 | orchestrator | Saturday 28 March 2026 05:29:02 +0000 (0:00:00.848) 0:15:08.566 ******** 2026-03-28 05:29:09.081781 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081799 | orchestrator | 2026-03-28 05:29:09.081816 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 05:29:09.081833 | orchestrator | Saturday 28 March 2026 05:29:02 +0000 (0:00:00.815) 0:15:09.382 ******** 2026-03-28 05:29:09.081849 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081865 | orchestrator | 2026-03-28 05:29:09.081882 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 05:29:09.081899 | orchestrator | Saturday 28 March 2026 05:29:03 +0000 (0:00:00.791) 0:15:10.173 ******** 2026-03-28 05:29:09.081915 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.081934 | orchestrator | 2026-03-28 05:29:09.081951 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 05:29:09.082013 | orchestrator | Saturday 28 March 2026 05:29:04 +0000 (0:00:00.782) 0:15:10.955 ******** 2026-03-28 05:29:09.082087 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.082098 | 
orchestrator | 2026-03-28 05:29:09.082108 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 05:29:09.082117 | orchestrator | Saturday 28 March 2026 05:29:05 +0000 (0:00:00.828) 0:15:11.784 ******** 2026-03-28 05:29:09.082127 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.082137 | orchestrator | 2026-03-28 05:29:09.082147 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 05:29:09.082156 | orchestrator | Saturday 28 March 2026 05:29:06 +0000 (0:00:00.761) 0:15:12.546 ******** 2026-03-28 05:29:09.082166 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:29:09.082175 | orchestrator | 2026-03-28 05:29:09.082185 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-28 05:29:09.082195 | orchestrator | 2026-03-28 05:29:09.082205 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-28 05:29:09.082214 | orchestrator | Saturday 28 March 2026 05:29:07 +0000 (0:00:01.035) 0:15:13.582 ******** 2026-03-28 05:29:09.082224 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:09.082234 | orchestrator | 2026-03-28 05:29:09.082244 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-28 05:29:09.082253 | orchestrator | Saturday 28 March 2026 05:29:08 +0000 (0:00:01.135) 0:15:14.718 ******** 2026-03-28 05:29:09.082263 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:09.082273 | orchestrator | 2026-03-28 05:29:09.082283 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-28 05:29:09.082303 | orchestrator | Saturday 28 March 2026 05:29:09 +0000 (0:00:00.780) 0:15:15.499 ******** 2026-03-28 05:29:35.069088 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:35.069195 | orchestrator | 2026-03-28 05:29:35.069206 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-28 05:29:35.069251 | orchestrator | Saturday 28 March 2026 05:29:09 +0000 (0:00:00.876) 0:15:16.375 ******** 2026-03-28 05:29:35.069260 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069268 | orchestrator | 2026-03-28 05:29:35.069275 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:29:35.069294 | orchestrator | Saturday 28 March 2026 05:29:10 +0000 (0:00:00.829) 0:15:17.204 ******** 2026-03-28 05:29:35.069301 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-28 05:29:35.069308 | orchestrator | 2026-03-28 05:29:35.069314 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:29:35.069320 | orchestrator | Saturday 28 March 2026 05:29:11 +0000 (0:00:01.117) 0:15:18.322 ******** 2026-03-28 05:29:35.069327 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069333 | orchestrator | 2026-03-28 05:29:35.069339 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:29:35.069346 | orchestrator | Saturday 28 March 2026 05:29:13 +0000 (0:00:01.502) 0:15:19.825 ******** 2026-03-28 05:29:35.069352 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069358 | orchestrator | 2026-03-28 05:29:35.069364 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:29:35.069371 | orchestrator | Saturday 28 March 2026 05:29:14 +0000 (0:00:01.257) 0:15:21.083 ******** 2026-03-28 05:29:35.069377 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069383 | orchestrator | 2026-03-28 05:29:35.069389 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:29:35.069396 | orchestrator | Saturday 28 March 2026 05:29:16 +0000 (0:00:01.539) 0:15:22.623 
******** 2026-03-28 05:29:35.069402 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069408 | orchestrator | 2026-03-28 05:29:35.069415 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:29:35.069421 | orchestrator | Saturday 28 March 2026 05:29:17 +0000 (0:00:01.210) 0:15:23.833 ******** 2026-03-28 05:29:35.069469 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069477 | orchestrator | 2026-03-28 05:29:35.069484 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 05:29:35.069490 | orchestrator | Saturday 28 March 2026 05:29:18 +0000 (0:00:01.158) 0:15:24.992 ******** 2026-03-28 05:29:35.069496 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069502 | orchestrator | 2026-03-28 05:29:35.069509 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:29:35.069516 | orchestrator | Saturday 28 March 2026 05:29:19 +0000 (0:00:01.151) 0:15:26.145 ******** 2026-03-28 05:29:35.069523 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:35.069529 | orchestrator | 2026-03-28 05:29:35.069536 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:29:35.069542 | orchestrator | Saturday 28 March 2026 05:29:20 +0000 (0:00:01.200) 0:15:27.346 ******** 2026-03-28 05:29:35.069549 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069555 | orchestrator | 2026-03-28 05:29:35.069561 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:29:35.069567 | orchestrator | Saturday 28 March 2026 05:29:22 +0000 (0:00:01.149) 0:15:28.495 ******** 2026-03-28 05:29:35.069574 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:29:35.069580 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-28 05:29:35.069586 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:29:35.069593 | orchestrator | 2026-03-28 05:29:35.069599 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:29:35.069605 | orchestrator | Saturday 28 March 2026 05:29:24 +0000 (0:00:02.076) 0:15:30.571 ******** 2026-03-28 05:29:35.069611 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:35.069618 | orchestrator | 2026-03-28 05:29:35.069624 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 05:29:35.069630 | orchestrator | Saturday 28 March 2026 05:29:25 +0000 (0:00:01.751) 0:15:32.323 ******** 2026-03-28 05:29:35.069638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:29:35.069645 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:29:35.069652 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:29:35.069659 | orchestrator | 2026-03-28 05:29:35.069667 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:29:35.069674 | orchestrator | Saturday 28 March 2026 05:29:29 +0000 (0:00:03.564) 0:15:35.887 ******** 2026-03-28 05:29:35.069681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:29:35.069690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:29:35.069697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:29:35.069704 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:35.069712 | orchestrator | 2026-03-28 05:29:35.069719 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:29:35.069727 | orchestrator | Saturday 28 March 2026 05:29:30 +0000 (0:00:01.496) 
0:15:37.384 ******** 2026-03-28 05:29:35.069737 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069750 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069788 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:35.069795 | orchestrator | 2026-03-28 05:29:35.069801 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:29:35.069811 | orchestrator | Saturday 28 March 2026 05:29:32 +0000 (0:00:01.642) 0:15:39.026 ******** 2026-03-28 05:29:35.069819 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069836 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:35.069842 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:35.069848 | orchestrator | 2026-03-28 05:29:35.069855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:29:35.069861 | orchestrator | Saturday 28 March 2026 05:29:33 +0000 (0:00:01.248) 0:15:40.275 ******** 2026-03-28 05:29:35.069870 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:29:26.887207', 'end': '2026-03-28 05:29:26.931472', 'delta': '0:00:00.044265', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:29:35.069879 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:29:27.514549', 'end': '2026-03-28 
05:29:27.549402', 'delta': '0:00:00.034853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:29:35.069891 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '99ef085e2de2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:29:28.075051', 'end': '2026-03-28 05:29:28.125282', 'delta': '0:00:00.050231', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['99ef085e2de2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 05:29:54.086329 | orchestrator | 2026-03-28 05:29:54.086421 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 05:29:54.086433 | orchestrator | Saturday 28 March 2026 05:29:35 +0000 (0:00:01.202) 0:15:41.478 ******** 2026-03-28 05:29:54.086440 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:54.086448 | orchestrator | 2026-03-28 05:29:54.086455 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 05:29:54.086475 | orchestrator | Saturday 28 March 2026 05:29:36 +0000 (0:00:01.345) 0:15:42.823 ******** 2026-03-28 05:29:54.086482 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 05:29:54.086489 | orchestrator | 2026-03-28 05:29:54.086495 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 05:29:54.086502 | orchestrator | Saturday 28 March 2026 05:29:37 +0000 (0:00:01.291) 0:15:44.115 ******** 2026-03-28 05:29:54.086508 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:54.086515 | orchestrator | 2026-03-28 05:29:54.086522 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 05:29:54.086528 | orchestrator | Saturday 28 March 2026 05:29:38 +0000 (0:00:01.180) 0:15:45.296 ******** 2026-03-28 05:29:54.086535 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:29:54.086541 | orchestrator | 2026-03-28 05:29:54.086547 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:29:54.086554 | orchestrator | Saturday 28 March 2026 05:29:40 +0000 (0:00:02.029) 0:15:47.326 ******** 2026-03-28 05:29:54.086560 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:29:54.086566 | orchestrator | 2026-03-28 05:29:54.086573 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 05:29:54.086579 | orchestrator | Saturday 28 March 2026 05:29:42 +0000 (0:00:01.181) 0:15:48.507 ******** 2026-03-28 05:29:54.086586 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086592 | orchestrator | 2026-03-28 05:29:54.086598 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:29:54.086605 | orchestrator | Saturday 28 March 2026 05:29:43 +0000 (0:00:01.173) 0:15:49.681 ******** 2026-03-28 05:29:54.086611 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086617 | orchestrator | 2026-03-28 05:29:54.086624 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 
05:29:54.086630 | orchestrator | Saturday 28 March 2026 05:29:44 +0000 (0:00:01.304) 0:15:50.986 ******** 2026-03-28 05:29:54.086637 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086643 | orchestrator | 2026-03-28 05:29:54.086649 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:29:54.086655 | orchestrator | Saturday 28 March 2026 05:29:45 +0000 (0:00:01.200) 0:15:52.187 ******** 2026-03-28 05:29:54.086662 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086668 | orchestrator | 2026-03-28 05:29:54.086675 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:29:54.086681 | orchestrator | Saturday 28 March 2026 05:29:47 +0000 (0:00:01.251) 0:15:53.439 ******** 2026-03-28 05:29:54.086688 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086694 | orchestrator | 2026-03-28 05:29:54.086700 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:29:54.086707 | orchestrator | Saturday 28 March 2026 05:29:48 +0000 (0:00:01.181) 0:15:54.620 ******** 2026-03-28 05:29:54.086713 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086719 | orchestrator | 2026-03-28 05:29:54.086726 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:29:54.086732 | orchestrator | Saturday 28 March 2026 05:29:49 +0000 (0:00:01.140) 0:15:55.761 ******** 2026-03-28 05:29:54.086738 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086760 | orchestrator | 2026-03-28 05:29:54.086767 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:29:54.086773 | orchestrator | Saturday 28 March 2026 05:29:50 +0000 (0:00:01.130) 0:15:56.891 ******** 2026-03-28 05:29:54.086779 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086786 | 
orchestrator | 2026-03-28 05:29:54.086792 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:29:54.086799 | orchestrator | Saturday 28 March 2026 05:29:51 +0000 (0:00:01.134) 0:15:58.025 ******** 2026-03-28 05:29:54.086805 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:54.086811 | orchestrator | 2026-03-28 05:29:54.086817 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 05:29:54.086823 | orchestrator | Saturday 28 March 2026 05:29:52 +0000 (0:00:01.183) 0:15:59.209 ******** 2026-03-28 05:29:54.086832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:54.086841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:54.086860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-28 05:29:54.086872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:29:54.086881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:54.086889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:54.086900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 
05:29:54.087014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e4bb62b9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:29:55.390290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:55.390383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:29:55.390397 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:29:55.390408 | orchestrator | 2026-03-28 05:29:55.390418 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:29:55.390428 | orchestrator | Saturday 28 March 2026 05:29:54 +0000 (0:00:01.292) 0:16:00.502 ******** 2026-03-28 05:29:55.390439 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390470 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390480 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390490 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390519 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390534 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390544 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e4bb62b9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:29:55.390586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:30:31.281807 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:30:31.282100 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
05:30:31.282127 | orchestrator | 2026-03-28 05:30:31.282142 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:30:31.282155 | orchestrator | Saturday 28 March 2026 05:29:55 +0000 (0:00:01.308) 0:16:01.811 ******** 2026-03-28 05:30:31.282167 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:30:31.282180 | orchestrator | 2026-03-28 05:30:31.282220 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:30:31.282232 | orchestrator | Saturday 28 March 2026 05:29:56 +0000 (0:00:01.543) 0:16:03.355 ******** 2026-03-28 05:30:31.282243 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:30:31.282254 | orchestrator | 2026-03-28 05:30:31.282265 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:30:31.282276 | orchestrator | Saturday 28 March 2026 05:29:58 +0000 (0:00:01.159) 0:16:04.514 ******** 2026-03-28 05:30:31.282287 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:30:31.282298 | orchestrator | 2026-03-28 05:30:31.282311 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:30:31.282324 | orchestrator | Saturday 28 March 2026 05:29:59 +0000 (0:00:01.600) 0:16:06.115 ******** 2026-03-28 05:30:31.282337 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.282349 | orchestrator | 2026-03-28 05:30:31.282362 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:30:31.282375 | orchestrator | Saturday 28 March 2026 05:30:00 +0000 (0:00:01.186) 0:16:07.302 ******** 2026-03-28 05:30:31.282387 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.282399 | orchestrator | 2026-03-28 05:30:31.282411 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:30:31.282424 | orchestrator | Saturday 28 March 2026 
05:30:02 +0000 (0:00:01.341) 0:16:08.644 ******** 2026-03-28 05:30:31.282436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.282448 | orchestrator | 2026-03-28 05:30:31.282461 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:30:31.282473 | orchestrator | Saturday 28 March 2026 05:30:03 +0000 (0:00:01.321) 0:16:09.966 ******** 2026-03-28 05:30:31.282486 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-28 05:30:31.282499 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-28 05:30:31.282512 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:30:31.282524 | orchestrator | 2026-03-28 05:30:31.282537 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:30:31.282550 | orchestrator | Saturday 28 March 2026 05:30:05 +0000 (0:00:01.779) 0:16:11.745 ******** 2026-03-28 05:30:31.282562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:30:31.282575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:30:31.282587 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:30:31.282601 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.282613 | orchestrator | 2026-03-28 05:30:31.282626 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:30:31.282639 | orchestrator | Saturday 28 March 2026 05:30:06 +0000 (0:00:01.230) 0:16:12.976 ******** 2026-03-28 05:30:31.282652 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.282663 | orchestrator | 2026-03-28 05:30:31.282674 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:30:31.282684 | orchestrator | Saturday 28 March 2026 05:30:07 +0000 (0:00:01.125) 0:16:14.102 ******** 2026-03-28 05:30:31.282695 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:30:31.282707 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:30:31.282725 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:30:31.282743 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:30:31.282760 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:30:31.282778 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:30:31.282796 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:30:31.282813 | orchestrator | 2026-03-28 05:30:31.282831 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:30:31.282863 | orchestrator | Saturday 28 March 2026 05:30:09 +0000 (0:00:01.782) 0:16:15.885 ******** 2026-03-28 05:30:31.282908 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:30:31.282929 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:30:31.282940 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:30:31.282951 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:30:31.283003 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:30:31.283016 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:30:31.283027 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:30:31.283038 | orchestrator | 2026-03-28 05:30:31.283049 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-28 05:30:31.283060 | orchestrator | Saturday 28 March 2026 05:30:11 +0000 (0:00:02.135) 0:16:18.020 ******** 2026-03-28 05:30:31.283071 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283082 | orchestrator | 2026-03-28 05:30:31.283093 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-28 05:30:31.283104 | orchestrator | Saturday 28 March 2026 05:30:12 +0000 (0:00:00.984) 0:16:19.004 ******** 2026-03-28 05:30:31.283115 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283126 | orchestrator | 2026-03-28 05:30:31.283137 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-28 05:30:31.283148 | orchestrator | Saturday 28 March 2026 05:30:13 +0000 (0:00:00.872) 0:16:19.877 ******** 2026-03-28 05:30:31.283159 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283170 | orchestrator | 2026-03-28 05:30:31.283181 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-28 05:30:31.283193 | orchestrator | Saturday 28 March 2026 05:30:14 +0000 (0:00:00.811) 0:16:20.688 ******** 2026-03-28 05:30:31.283203 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283214 | orchestrator | 2026-03-28 05:30:31.283225 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-28 05:30:31.283236 | orchestrator | Saturday 28 March 2026 05:30:15 +0000 (0:00:00.866) 0:16:21.555 ******** 2026-03-28 05:30:31.283247 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283265 | orchestrator | 2026-03-28 05:30:31.283282 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-28 05:30:31.283301 | orchestrator | Saturday 28 March 2026 05:30:15 +0000 (0:00:00.790) 0:16:22.346 ******** 
2026-03-28 05:30:31.283318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:30:31.283336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:30:31.283354 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:30:31.283370 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283387 | orchestrator | 2026-03-28 05:30:31.283406 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-28 05:30:31.283425 | orchestrator | Saturday 28 March 2026 05:30:17 +0000 (0:00:01.597) 0:16:23.943 ******** 2026-03-28 05:30:31.283445 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-28 05:30:31.283463 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-28 05:30:31.283482 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-28 05:30:31.283493 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-28 05:30:31.283504 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-28 05:30:31.283515 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-28 05:30:31.283538 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283549 | orchestrator | 2026-03-28 05:30:31.283560 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-28 05:30:31.283571 | orchestrator | Saturday 28 March 2026 05:30:19 +0000 (0:00:01.952) 0:16:25.896 ******** 2026-03-28 05:30:31.283582 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:30:31.283593 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:30:31.283604 | orchestrator | 2026-03-28 05:30:31.283615 | 
orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-28 05:30:31.283626 | orchestrator | Saturday 28 March 2026 05:30:22 +0000 (0:00:03.095) 0:16:28.992 ******** 2026-03-28 05:30:31.283637 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:30:31.283648 | orchestrator | 2026-03-28 05:30:31.283659 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:30:31.283669 | orchestrator | Saturday 28 March 2026 05:30:24 +0000 (0:00:02.141) 0:16:31.133 ******** 2026-03-28 05:30:31.283680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-28 05:30:31.283693 | orchestrator | 2026-03-28 05:30:31.283709 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:30:31.283728 | orchestrator | Saturday 28 March 2026 05:30:25 +0000 (0:00:01.144) 0:16:32.277 ******** 2026-03-28 05:30:31.283745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-28 05:30:31.283763 | orchestrator | 2026-03-28 05:30:31.283780 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:30:31.283797 | orchestrator | Saturday 28 March 2026 05:30:27 +0000 (0:00:01.225) 0:16:33.503 ******** 2026-03-28 05:30:31.283816 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:30:31.283836 | orchestrator | 2026-03-28 05:30:31.283855 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:30:31.283874 | orchestrator | Saturday 28 March 2026 05:30:28 +0000 (0:00:01.750) 0:16:35.253 ******** 2026-03-28 05:30:31.283930 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.283948 | orchestrator | 2026-03-28 05:30:31.283966 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-28 05:30:31.283984 | orchestrator | Saturday 28 March 2026 05:30:29 +0000 (0:00:01.175) 0:16:36.429 ******** 2026-03-28 05:30:31.284001 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:30:31.284018 | orchestrator | 2026-03-28 05:30:31.284037 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:30:31.284082 | orchestrator | Saturday 28 March 2026 05:30:31 +0000 (0:00:01.268) 0:16:37.697 ******** 2026-03-28 05:31:14.860190 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:31:14.860316 | orchestrator | 2026-03-28 05:31:14.860334 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 05:31:14.860347 | orchestrator | Saturday 28 March 2026 05:30:32 +0000 (0:00:01.249) 0:16:38.946 ******** 2026-03-28 05:31:14.860359 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:31:14.860371 | orchestrator | 2026-03-28 05:31:14.860383 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 05:31:14.860394 | orchestrator | Saturday 28 March 2026 05:30:34 +0000 (0:00:01.604) 0:16:40.550 ******** 2026-03-28 05:31:14.860405 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:31:14.860416 | orchestrator | 2026-03-28 05:31:14.860427 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 05:31:14.860438 | orchestrator | Saturday 28 March 2026 05:30:35 +0000 (0:00:01.144) 0:16:41.695 ******** 2026-03-28 05:31:14.860449 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:31:14.860460 | orchestrator | 2026-03-28 05:31:14.860471 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 05:31:14.860482 | orchestrator | Saturday 28 March 2026 05:30:36 +0000 (0:00:01.225) 0:16:42.920 ******** 2026-03-28 05:31:14.860493 | orchestrator | ok: [testbed-node-2] 
2026-03-28 05:31:14.860530 | orchestrator | 
2026-03-28 05:31:14.860542 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 05:31:14.860553 | orchestrator | Saturday 28 March 2026 05:30:38 +0000 (0:00:01.597) 0:16:44.518 ********
2026-03-28 05:31:14.860563 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.860574 | orchestrator | 
2026-03-28 05:31:14.860585 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 05:31:14.860596 | orchestrator | Saturday 28 March 2026 05:30:39 +0000 (0:00:01.615) 0:16:46.134 ********
2026-03-28 05:31:14.860607 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860618 | orchestrator | 
2026-03-28 05:31:14.860629 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 05:31:14.860640 | orchestrator | Saturday 28 March 2026 05:30:40 +0000 (0:00:00.829) 0:16:46.963 ********
2026-03-28 05:31:14.860651 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.860662 | orchestrator | 
2026-03-28 05:31:14.860673 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 05:31:14.860683 | orchestrator | Saturday 28 March 2026 05:30:41 +0000 (0:00:00.844) 0:16:47.808 ********
2026-03-28 05:31:14.860694 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860705 | orchestrator | 
2026-03-28 05:31:14.860719 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 05:31:14.860731 | orchestrator | Saturday 28 March 2026 05:30:42 +0000 (0:00:00.944) 0:16:48.753 ********
2026-03-28 05:31:14.860744 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860755 | orchestrator | 
2026-03-28 05:31:14.860768 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 05:31:14.860781 | orchestrator | Saturday 28 March 2026 05:30:43 +0000 (0:00:00.803) 0:16:49.556 ********
2026-03-28 05:31:14.860793 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860805 | orchestrator | 
2026-03-28 05:31:14.860818 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 05:31:14.860830 | orchestrator | Saturday 28 March 2026 05:30:43 +0000 (0:00:00.798) 0:16:50.354 ********
2026-03-28 05:31:14.860843 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860855 | orchestrator | 
2026-03-28 05:31:14.860867 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 05:31:14.860907 | orchestrator | Saturday 28 March 2026 05:30:44 +0000 (0:00:00.874) 0:16:51.229 ********
2026-03-28 05:31:14.860921 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.860934 | orchestrator | 
2026-03-28 05:31:14.860946 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 05:31:14.860959 | orchestrator | Saturday 28 March 2026 05:30:45 +0000 (0:00:00.797) 0:16:52.026 ********
2026-03-28 05:31:14.860972 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.860984 | orchestrator | 
2026-03-28 05:31:14.860997 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 05:31:14.861009 | orchestrator | Saturday 28 March 2026 05:30:46 +0000 (0:00:00.811) 0:16:52.838 ********
2026-03-28 05:31:14.861028 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.861048 | orchestrator | 
2026-03-28 05:31:14.861069 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 05:31:14.861089 | orchestrator | Saturday 28 March 2026 05:30:47 +0000 (0:00:00.844) 0:16:53.683 ********
2026-03-28 05:31:14.861102 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.861113 | orchestrator | 
2026-03-28 05:31:14.861124 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 05:31:14.861135 | orchestrator | Saturday 28 March 2026 05:30:48 +0000 (0:00:01.018) 0:16:54.701 ********
2026-03-28 05:31:14.861146 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861157 | orchestrator | 
2026-03-28 05:31:14.861167 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 05:31:14.861178 | orchestrator | Saturday 28 March 2026 05:30:49 +0000 (0:00:00.840) 0:16:55.542 ********
2026-03-28 05:31:14.861189 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861209 | orchestrator | 
2026-03-28 05:31:14.861220 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 05:31:14.861231 | orchestrator | Saturday 28 March 2026 05:30:49 +0000 (0:00:00.833) 0:16:56.376 ********
2026-03-28 05:31:14.861241 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861252 | orchestrator | 
2026-03-28 05:31:14.861264 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 05:31:14.861274 | orchestrator | Saturday 28 March 2026 05:30:50 +0000 (0:00:00.841) 0:16:57.217 ********
2026-03-28 05:31:14.861285 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861296 | orchestrator | 
2026-03-28 05:31:14.861307 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 05:31:14.861318 | orchestrator | Saturday 28 March 2026 05:30:51 +0000 (0:00:00.838) 0:16:58.056 ********
2026-03-28 05:31:14.861329 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861340 | orchestrator | 
2026-03-28 05:31:14.861383 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 05:31:14.861395 | orchestrator | Saturday 28 March 2026 05:30:52 +0000 (0:00:00.805) 0:16:58.861 ********
2026-03-28 05:31:14.861406 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861417 | orchestrator | 
2026-03-28 05:31:14.861429 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 05:31:14.861440 | orchestrator | Saturday 28 March 2026 05:30:53 +0000 (0:00:00.824) 0:16:59.686 ********
2026-03-28 05:31:14.861450 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861461 | orchestrator | 
2026-03-28 05:31:14.861472 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 05:31:14.861484 | orchestrator | Saturday 28 March 2026 05:30:54 +0000 (0:00:00.839) 0:17:00.525 ********
2026-03-28 05:31:14.861495 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861506 | orchestrator | 
2026-03-28 05:31:14.861517 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 05:31:14.861528 | orchestrator | Saturday 28 March 2026 05:30:54 +0000 (0:00:00.826) 0:17:01.351 ********
2026-03-28 05:31:14.861539 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861549 | orchestrator | 
2026-03-28 05:31:14.861560 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 05:31:14.861571 | orchestrator | Saturday 28 March 2026 05:30:55 +0000 (0:00:00.800) 0:17:02.152 ********
2026-03-28 05:31:14.861582 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861593 | orchestrator | 
2026-03-28 05:31:14.861604 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 05:31:14.861615 | orchestrator | Saturday 28 March 2026 05:30:56 +0000 (0:00:00.767) 0:17:02.920 ********
2026-03-28 05:31:14.861626 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861637 | orchestrator | 
2026-03-28 05:31:14.861648 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 05:31:14.861659 | orchestrator | Saturday 28 March 2026 05:30:57 +0000 (0:00:00.802) 0:17:03.723 ********
2026-03-28 05:31:14.861670 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861680 | orchestrator | 
2026-03-28 05:31:14.861691 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 05:31:14.861702 | orchestrator | Saturday 28 March 2026 05:30:58 +0000 (0:00:00.839) 0:17:04.562 ********
2026-03-28 05:31:14.861713 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.861724 | orchestrator | 
2026-03-28 05:31:14.861735 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 05:31:14.861746 | orchestrator | Saturday 28 March 2026 05:30:59 +0000 (0:00:01.710) 0:17:06.273 ********
2026-03-28 05:31:14.861757 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.861769 | orchestrator | 
2026-03-28 05:31:14.861789 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 05:31:14.861808 | orchestrator | Saturday 28 March 2026 05:31:01 +0000 (0:00:02.105) 0:17:08.379 ********
2026-03-28 05:31:14.861826 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-28 05:31:14.861856 | orchestrator | 
2026-03-28 05:31:14.861874 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 05:31:14.861920 | orchestrator | Saturday 28 March 2026 05:31:03 +0000 (0:00:01.165) 0:17:09.544 ********
2026-03-28 05:31:14.861937 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.861955 | orchestrator | 
2026-03-28 05:31:14.861973 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 05:31:14.861991 | orchestrator | Saturday 28 March 2026 05:31:04 +0000 (0:00:01.189) 0:17:10.734 ********
2026-03-28 05:31:14.862010 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.862107 | orchestrator | 
2026-03-28 05:31:14.862125 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 05:31:14.862145 | orchestrator | Saturday 28 March 2026 05:31:05 +0000 (0:00:01.159) 0:17:11.894 ********
2026-03-28 05:31:14.862175 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 05:31:14.862193 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 05:31:14.862212 | orchestrator | 
2026-03-28 05:31:14.862231 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 05:31:14.862250 | orchestrator | Saturday 28 March 2026 05:31:07 +0000 (0:00:01.897) 0:17:13.792 ********
2026-03-28 05:31:14.862269 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.862288 | orchestrator | 
2026-03-28 05:31:14.862306 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 05:31:14.862325 | orchestrator | Saturday 28 March 2026 05:31:08 +0000 (0:00:01.516) 0:17:15.309 ********
2026-03-28 05:31:14.862345 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.862363 | orchestrator | 
2026-03-28 05:31:14.862381 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 05:31:14.862398 | orchestrator | Saturday 28 March 2026 05:31:10 +0000 (0:00:01.285) 0:17:16.595 ********
2026-03-28 05:31:14.862409 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.862420 | orchestrator | 
2026-03-28 05:31:14.862430 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 05:31:14.862441 | orchestrator | Saturday 28 March 2026 05:31:10 +0000 (0:00:00.804) 0:17:17.400 ********
2026-03-28 05:31:14.862452 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:14.862463 | orchestrator | 
2026-03-28 05:31:14.862474 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 05:31:14.862484 | orchestrator | Saturday 28 March 2026 05:31:11 +0000 (0:00:00.794) 0:17:18.195 ********
2026-03-28 05:31:14.862495 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-28 05:31:14.862506 | orchestrator | 
2026-03-28 05:31:14.862517 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 05:31:14.862528 | orchestrator | Saturday 28 March 2026 05:31:12 +0000 (0:00:01.134) 0:17:19.329 ********
2026-03-28 05:31:14.862538 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:14.862549 | orchestrator | 
2026-03-28 05:31:14.862569 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 05:31:14.862595 | orchestrator | Saturday 28 March 2026 05:31:14 +0000 (0:00:01.950) 0:17:21.280 ********
2026-03-28 05:31:55.827407 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-03-28 05:31:55.827507 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-03-28 05:31:55.827520 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4) 
2026-03-28 05:31:55.827530 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827539 | orchestrator | 
2026-03-28 05:31:55.827548 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 05:31:55.827556 | orchestrator | Saturday 28 March 2026 05:31:16 +0000 (0:00:01.189) 0:17:22.469 ********
2026-03-28 05:31:55.827564 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827592 | orchestrator | 
2026-03-28 05:31:55.827600 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 05:31:55.827608 | orchestrator | Saturday 28 March 2026 05:31:17 +0000 (0:00:01.227) 0:17:23.697 ********
2026-03-28 05:31:55.827616 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827624 | orchestrator | 
2026-03-28 05:31:55.827632 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 05:31:55.827640 | orchestrator | Saturday 28 March 2026 05:31:18 +0000 (0:00:01.153) 0:17:24.851 ********
2026-03-28 05:31:55.827647 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827655 | orchestrator | 
2026-03-28 05:31:55.827663 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 05:31:55.827671 | orchestrator | Saturday 28 March 2026 05:31:19 +0000 (0:00:01.157) 0:17:26.008 ********
2026-03-28 05:31:55.827678 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827686 | orchestrator | 
2026-03-28 05:31:55.827694 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 05:31:55.827702 | orchestrator | Saturday 28 March 2026 05:31:20 +0000 (0:00:01.244) 0:17:27.253 ********
2026-03-28 05:31:55.827710 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827717 | orchestrator | 
2026-03-28 05:31:55.827725 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 05:31:55.827733 | orchestrator | Saturday 28 March 2026 05:31:21 +0000 (0:00:00.852) 0:17:28.106 ********
2026-03-28 05:31:55.827741 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:55.827750 | orchestrator | 
2026-03-28 05:31:55.827758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 05:31:55.827766 | orchestrator | Saturday 28 March 2026 05:31:23 +0000 (0:00:02.289) 0:17:30.396 ********
2026-03-28 05:31:55.827774 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:55.827781 | orchestrator | 
2026-03-28 05:31:55.827789 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 05:31:55.827797 | orchestrator | Saturday 28 March 2026 05:31:24 +0000 (0:00:00.839) 0:17:31.235 ********
2026-03-28 05:31:55.827805 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-03-28 05:31:55.827813 | orchestrator | 
2026-03-28 05:31:55.827821 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 05:31:55.827829 | orchestrator | Saturday 28 March 2026 05:31:25 +0000 (0:00:01.145) 0:17:32.381 ********
2026-03-28 05:31:55.827837 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827888 | orchestrator | 
2026-03-28 05:31:55.827896 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 05:31:55.827904 | orchestrator | Saturday 28 March 2026 05:31:27 +0000 (0:00:01.202) 0:17:33.584 ********
2026-03-28 05:31:55.827912 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827919 | orchestrator | 
2026-03-28 05:31:55.827927 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 05:31:55.827935 | orchestrator | Saturday 28 March 2026 05:31:28 +0000 (0:00:01.154) 0:17:34.738 ********
2026-03-28 05:31:55.827943 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827950 | orchestrator | 
2026-03-28 05:31:55.827959 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 05:31:55.827968 | orchestrator | Saturday 28 March 2026 05:31:29 +0000 (0:00:01.353) 0:17:36.092 ********
2026-03-28 05:31:55.827977 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.827986 | orchestrator | 
2026-03-28 05:31:55.827995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 05:31:55.828004 | orchestrator | Saturday 28 March 2026 05:31:30 +0000 (0:00:01.165) 0:17:37.258 ********
2026-03-28 05:31:55.828013 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828022 | orchestrator | 
2026-03-28 05:31:55.828031 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 05:31:55.828040 | orchestrator | Saturday 28 March 2026 05:31:31 +0000 (0:00:01.141) 0:17:38.399 ********
2026-03-28 05:31:55.828056 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828065 | orchestrator | 
2026-03-28 05:31:55.828074 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 05:31:55.828083 | orchestrator | Saturday 28 March 2026 05:31:33 +0000 (0:00:01.157) 0:17:39.556 ********
2026-03-28 05:31:55.828092 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828101 | orchestrator | 
2026-03-28 05:31:55.828110 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 05:31:55.828119 | orchestrator | Saturday 28 March 2026 05:31:34 +0000 (0:00:01.204) 0:17:40.760 ********
2026-03-28 05:31:55.828128 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828137 | orchestrator | 
2026-03-28 05:31:55.828156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 05:31:55.828165 | orchestrator | Saturday 28 March 2026 05:31:35 +0000 (0:00:01.170) 0:17:41.931 ********
2026-03-28 05:31:55.828174 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:31:55.828183 | orchestrator | 
2026-03-28 05:31:55.828191 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 05:31:55.828201 | orchestrator | Saturday 28 March 2026 05:31:36 +0000 (0:00:00.890) 0:17:42.822 ********
2026-03-28 05:31:55.828222 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-03-28 05:31:55.828232 | orchestrator | 
2026-03-28 05:31:55.828241 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 05:31:55.828265 | orchestrator | Saturday 28 March 2026 05:31:37 +0000 (0:00:01.198) 0:17:44.020 ********
2026-03-28 05:31:55.828275 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-03-28 05:31:55.828284 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-28 05:31:55.828294 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-28 05:31:55.828303 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-28 05:31:55.828313 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-28 05:31:55.828322 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-28 05:31:55.828331 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-28 05:31:55.828339 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-28 05:31:55.828347 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 05:31:55.828355 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 05:31:55.828363 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 05:31:55.828371 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 05:31:55.828379 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 05:31:55.828387 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 05:31:55.828395 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-03-28 05:31:55.828403 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-03-28 05:31:55.828411 | orchestrator | 
2026-03-28 05:31:55.828418 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 05:31:55.828426 | orchestrator | Saturday 28 March 2026 05:31:44 +0000 (0:00:06.477) 0:17:50.498 ********
2026-03-28 05:31:55.828434 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828442 | orchestrator | 
2026-03-28 05:31:55.828449 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 05:31:55.828457 | orchestrator | Saturday 28 March 2026 05:31:44 +0000 (0:00:00.896) 0:17:51.394 ********
2026-03-28 05:31:55.828465 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828473 | orchestrator | 
2026-03-28 05:31:55.828481 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 05:31:55.828488 | orchestrator | Saturday 28 March 2026 05:31:45 +0000 (0:00:00.853) 0:17:52.248 ********
2026-03-28 05:31:55.828496 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828510 | orchestrator | 
2026-03-28 05:31:55.828518 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 05:31:55.828526 | orchestrator | Saturday 28 March 2026 05:31:46 +0000 (0:00:00.966) 0:17:53.214 ********
2026-03-28 05:31:55.828534 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828541 | orchestrator | 
2026-03-28 05:31:55.828549 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 05:31:55.828557 | orchestrator | Saturday 28 March 2026 05:31:47 +0000 (0:00:00.822) 0:17:54.036 ********
2026-03-28 05:31:55.828565 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828572 | orchestrator | 
2026-03-28 05:31:55.828580 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 05:31:55.828588 | orchestrator | Saturday 28 March 2026 05:31:48 +0000 (0:00:00.791) 0:17:54.828 ********
2026-03-28 05:31:55.828596 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828603 | orchestrator | 
2026-03-28 05:31:55.828611 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 05:31:55.828619 | orchestrator | Saturday 28 March 2026 05:31:49 +0000 (0:00:00.841) 0:17:55.669 ********
2026-03-28 05:31:55.828627 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828635 | orchestrator | 
2026-03-28 05:31:55.828643 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 05:31:55.828650 | orchestrator | Saturday 28 March 2026 05:31:50 +0000 (0:00:00.806) 0:17:56.475 ********
2026-03-28 05:31:55.828658 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828666 | orchestrator | 
2026-03-28 05:31:55.828674 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 05:31:55.828682 | orchestrator | Saturday 28 March 2026 05:31:50 +0000 (0:00:00.878) 0:17:57.353 ********
2026-03-28 05:31:55.828690 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828698 | orchestrator | 
2026-03-28 05:31:55.828705 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 05:31:55.828713 | orchestrator | Saturday 28 March 2026 05:31:51 +0000 (0:00:00.812) 0:17:58.166 ********
2026-03-28 05:31:55.828721 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828729 | orchestrator | 
2026-03-28 05:31:55.828736 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 05:31:55.828744 | orchestrator | Saturday 28 March 2026 05:31:52 +0000 (0:00:00.823) 0:17:58.990 ********
2026-03-28 05:31:55.828752 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828760 | orchestrator | 
2026-03-28 05:31:55.828767 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 05:31:55.828775 | orchestrator | Saturday 28 March 2026 05:31:53 +0000 (0:00:00.768) 0:17:59.758 ********
2026-03-28 05:31:55.828783 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828791 | orchestrator | 
2026-03-28 05:31:55.828799 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 05:31:55.828806 | orchestrator | Saturday 28 March 2026 05:31:54 +0000 (0:00:00.794) 0:18:00.552 ********
2026-03-28 05:31:55.828814 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828822 | orchestrator | 
2026-03-28 05:31:55.828829 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 05:31:55.828877 | orchestrator | Saturday 28 March 2026 05:31:54 +0000 (0:00:00.874) 0:18:01.427 ********
2026-03-28 05:31:55.828887 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:31:55.828895 | orchestrator | 
2026-03-28 05:31:55.828902 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 05:31:55.828916 | orchestrator | Saturday 28 March 2026 05:31:55 +0000 (0:00:00.816) 0:18:02.244 ********
2026-03-28 05:32:45.225075 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225188 | orchestrator | 
2026-03-28 05:32:45.225205 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 05:32:45.225219 | orchestrator | Saturday 28 March 2026 05:31:56 +0000 (0:00:00.871) 0:18:03.115 ********
2026-03-28 05:32:45.225256 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225268 | orchestrator | 
2026-03-28 05:32:45.225279 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 05:32:45.225290 | orchestrator | Saturday 28 March 2026 05:31:57 +0000 (0:00:00.858) 0:18:03.974 ********
2026-03-28 05:32:45.225301 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225312 | orchestrator | 
2026-03-28 05:32:45.225324 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:32:45.225337 | orchestrator | Saturday 28 March 2026 05:31:58 +0000 (0:00:00.903) 0:18:04.878 ********
2026-03-28 05:32:45.225348 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225359 | orchestrator | 
2026-03-28 05:32:45.225370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:32:45.225381 | orchestrator | Saturday 28 March 2026 05:31:59 +0000 (0:00:00.800) 0:18:05.678 ********
2026-03-28 05:32:45.225392 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225402 | orchestrator | 
2026-03-28 05:32:45.225413 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:32:45.225424 | orchestrator | Saturday 28 March 2026 05:32:00 +0000 (0:00:00.809) 0:18:06.487 ********
2026-03-28 05:32:45.225435 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225446 | orchestrator | 
2026-03-28 05:32:45.225456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:32:45.225467 | orchestrator | Saturday 28 March 2026 05:32:00 +0000 (0:00:00.884) 0:18:07.372 ********
2026-03-28 05:32:45.225478 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225489 | orchestrator | 
2026-03-28 05:32:45.225500 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:32:45.225511 | orchestrator | Saturday 28 March 2026 05:32:01 +0000 (0:00:00.801) 0:18:08.173 ********
2026-03-28 05:32:45.225522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-03-28 05:32:45.225533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-03-28 05:32:45.225544 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-03-28 05:32:45.225555 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225566 | orchestrator | 
2026-03-28 05:32:45.225576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:32:45.225587 | orchestrator | Saturday 28 March 2026 05:32:02 +0000 (0:00:01.090) 0:18:09.263 ********
2026-03-28 05:32:45.225598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-03-28 05:32:45.225609 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-03-28 05:32:45.225622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-03-28 05:32:45.225634 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225647 | orchestrator | 
2026-03-28 05:32:45.225659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:32:45.225672 | orchestrator | Saturday 28 March 2026 05:32:03 +0000 (0:00:01.075) 0:18:10.338 ********
2026-03-28 05:32:45.225684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-03-28 05:32:45.225696 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-03-28 05:32:45.225708 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-03-28 05:32:45.225721 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225733 | orchestrator | 
2026-03-28 05:32:45.225745 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:32:45.225758 | orchestrator | Saturday 28 March 2026 05:32:04 +0000 (0:00:01.071) 0:18:11.410 ********
2026-03-28 05:32:45.225770 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225782 | orchestrator | 
2026-03-28 05:32:45.225827 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:32:45.225847 | orchestrator | Saturday 28 March 2026 05:32:05 +0000 (0:00:00.782) 0:18:12.192 ********
2026-03-28 05:32:45.225884 | orchestrator | skipping: [testbed-node-2] => (item=0) 
2026-03-28 05:32:45.225905 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.225925 | orchestrator | 
2026-03-28 05:32:45.225938 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:32:45.225951 | orchestrator | Saturday 28 March 2026 05:32:06 +0000 (0:00:00.887) 0:18:13.080 ********
2026-03-28 05:32:45.225964 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:32:45.225976 | orchestrator | 
2026-03-28 05:32:45.225987 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-28 05:32:45.225998 | orchestrator | Saturday 28 March 2026 05:32:08 +0000 (0:00:01.429) 0:18:14.509 ********
2026-03-28 05:32:45.226009 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226083 | orchestrator | 
2026-03-28 05:32:45.226096 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-28 05:32:45.226107 | orchestrator | Saturday 28 March 2026 05:32:09 +0000 (0:00:00.957) 0:18:15.467 ********
2026-03-28 05:32:45.226118 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-03-28 05:32:45.226130 | orchestrator | 
2026-03-28 05:32:45.226140 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-28 05:32:45.226151 | orchestrator | Saturday 28 March 2026 05:32:10 +0000 (0:00:01.298) 0:18:16.765 ********
2026-03-28 05:32:45.226162 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226172 | orchestrator | 
2026-03-28 05:32:45.226183 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-28 05:32:45.226210 | orchestrator | Saturday 28 March 2026 05:32:13 +0000 (0:00:03.234) 0:18:20.000 ********
2026-03-28 05:32:45.226221 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.226232 | orchestrator | 
2026-03-28 05:32:45.226243 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-28 05:32:45.226272 | orchestrator | Saturday 28 March 2026 05:32:14 +0000 (0:00:01.234) 0:18:21.234 ********
2026-03-28 05:32:45.226284 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226295 | orchestrator | 
2026-03-28 05:32:45.226306 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-28 05:32:45.226316 | orchestrator | Saturday 28 March 2026 05:32:16 +0000 (0:00:01.210) 0:18:22.445 ********
2026-03-28 05:32:45.226327 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226338 | orchestrator | 
2026-03-28 05:32:45.226349 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-28 05:32:45.226360 | orchestrator | Saturday 28 March 2026 05:32:17 +0000 (0:00:01.212) 0:18:23.658 ********
2026-03-28 05:32:45.226371 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:32:45.226382 | orchestrator | 
2026-03-28 05:32:45.226393 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-28 05:32:45.226404 | orchestrator | Saturday 28 March 2026 05:32:19 +0000 (0:00:02.129) 0:18:25.787 ********
2026-03-28 05:32:45.226415 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226426 | orchestrator | 
2026-03-28 05:32:45.226437 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-28 05:32:45.226448 | orchestrator | Saturday 28 March 2026 05:32:21 +0000 (0:00:01.661) 0:18:27.448 ********
2026-03-28 05:32:45.226459 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226470 | orchestrator | 
2026-03-28 05:32:45.226481 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-28 05:32:45.226492 | orchestrator | Saturday 28 March 2026 05:32:22 +0000 (0:00:01.555) 0:18:29.004 ********
2026-03-28 05:32:45.226503 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226513 | orchestrator | 
2026-03-28 05:32:45.226525 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-28 05:32:45.226535 | orchestrator | Saturday 28 March 2026 05:32:24 +0000 (0:00:01.551) 0:18:30.556 ********
2026-03-28 05:32:45.226546 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:32:45.226557 | orchestrator | 
2026-03-28 05:32:45.226568 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-28 05:32:45.226588 | orchestrator | Saturday 28 March 2026 05:32:25 +0000 (0:00:01.582) 0:18:32.139 ********
2026-03-28 05:32:45.226599 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:32:45.226610 | orchestrator | 
2026-03-28 05:32:45.226621 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-28 05:32:45.226632 | orchestrator | Saturday 28 March 2026 05:32:27 +0000 (0:00:01.687) 0:18:33.827 ********
2026-03-28 05:32:45.226643 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 05:32:45.226654 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 05:32:45.226665 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-28 05:32:45.226676 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-28 05:32:45.226687 | orchestrator | 
2026-03-28 05:32:45.226698 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-28 05:32:45.226709 | orchestrator | Saturday 28 March 2026 05:32:31 +0000 (0:00:04.342) 0:18:38.170 ********
2026-03-28 05:32:45.226719 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:32:45.226730 | orchestrator | 
2026-03-28 05:32:45.226741 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-28 05:32:45.226752 | orchestrator | Saturday 28 March 2026 05:32:33 +0000 (0:00:02.118) 0:18:40.288 ********
2026-03-28 05:32:45.226763 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226774 | orchestrator | 
2026-03-28 05:32:45.226785 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-28 05:32:45.226823 | orchestrator | Saturday 28 March 2026 05:32:34 +0000 (0:00:01.143) 0:18:41.432 ********
2026-03-28 05:32:45.226843 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226863 | orchestrator | 
2026-03-28 05:32:45.226881 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-28 05:32:45.226896 | orchestrator | Saturday 28 March 2026 05:32:36 +0000 (0:00:01.195) 0:18:42.627 ********
2026-03-28 05:32:45.226907 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226918 | orchestrator | 
2026-03-28 05:32:45.226929 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-28 05:32:45.226940 | orchestrator | Saturday 28 March 2026 05:32:37 +0000 (0:00:01.775) 0:18:44.403 ********
2026-03-28 05:32:45.226950 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:32:45.226961 | orchestrator | 
2026-03-28 05:32:45.226972 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-28 05:32:45.226983 | orchestrator | Saturday 28 March 2026 05:32:39 +0000 (0:00:01.592) 0:18:45.995 ********
2026-03-28 05:32:45.226994 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:32:45.227005 | orchestrator | 
2026-03-28 05:32:45.227016 | orchestrator | TASK [ceph-mon : Include start_monitor.yml]
************************************ 2026-03-28 05:32:45.227027 | orchestrator | Saturday 28 March 2026 05:32:40 +0000 (0:00:00.811) 0:18:46.807 ******** 2026-03-28 05:32:45.227037 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-03-28 05:32:45.227048 | orchestrator | 2026-03-28 05:32:45.227059 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-28 05:32:45.227070 | orchestrator | Saturday 28 March 2026 05:32:41 +0000 (0:00:01.192) 0:18:47.999 ******** 2026-03-28 05:32:45.227081 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:32:45.227091 | orchestrator | 2026-03-28 05:32:45.227102 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-28 05:32:45.227113 | orchestrator | Saturday 28 March 2026 05:32:42 +0000 (0:00:01.206) 0:18:49.206 ******** 2026-03-28 05:32:45.227124 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:32:45.227135 | orchestrator | 2026-03-28 05:32:45.227146 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-28 05:32:45.227163 | orchestrator | Saturday 28 March 2026 05:32:44 +0000 (0:00:01.252) 0:18:50.458 ******** 2026-03-28 05:32:45.227174 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-03-28 05:32:45.227193 | orchestrator | 2026-03-28 05:32:45.227204 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-28 05:32:45.227223 | orchestrator | Saturday 28 March 2026 05:32:45 +0000 (0:00:01.175) 0:18:51.634 ******** 2026-03-28 05:33:54.889542 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:33:54.889661 | orchestrator | 2026-03-28 05:33:54.889678 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-28 05:33:54.889692 | orchestrator | Saturday 28 March 2026 05:32:48 +0000 
(0:00:03.001) 0:18:54.635 ******** 2026-03-28 05:33:54.889703 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.889715 | orchestrator | 2026-03-28 05:33:54.889726 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-28 05:33:54.889787 | orchestrator | Saturday 28 March 2026 05:32:50 +0000 (0:00:02.038) 0:18:56.674 ******** 2026-03-28 05:33:54.889798 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.889810 | orchestrator | 2026-03-28 05:33:54.889821 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-28 05:33:54.889833 | orchestrator | Saturday 28 March 2026 05:32:52 +0000 (0:00:02.442) 0:18:59.117 ******** 2026-03-28 05:33:54.889844 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:33:54.889855 | orchestrator | 2026-03-28 05:33:54.889866 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-28 05:33:54.889877 | orchestrator | Saturday 28 March 2026 05:32:55 +0000 (0:00:03.049) 0:19:02.166 ******** 2026-03-28 05:33:54.889889 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-03-28 05:33:54.889901 | orchestrator | 2026-03-28 05:33:54.889912 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-28 05:33:54.889922 | orchestrator | Saturday 28 March 2026 05:32:56 +0000 (0:00:01.179) 0:19:03.346 ******** 2026-03-28 05:33:54.889934 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-28 05:33:54.889945 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.889957 | orchestrator | 2026-03-28 05:33:54.889968 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-28 05:33:54.889980 | orchestrator | Saturday 28 March 2026 05:33:19 +0000 (0:00:22.975) 0:19:26.321 ******** 2026-03-28 05:33:54.889991 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.890002 | orchestrator | 2026-03-28 05:33:54.890093 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-28 05:33:54.890111 | orchestrator | Saturday 28 March 2026 05:33:22 +0000 (0:00:02.645) 0:19:28.967 ******** 2026-03-28 05:33:54.890123 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890136 | orchestrator | 2026-03-28 05:33:54.890149 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-28 05:33:54.890162 | orchestrator | Saturday 28 March 2026 05:33:23 +0000 (0:00:00.805) 0:19:29.772 ******** 2026-03-28 05:33:54.890178 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:33:54.890204 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-28 05:33:54.890217 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-28 05:33:54.890257 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-28 05:33:54.890272 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-28 05:33:54.890300 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__11d449ed0eb571597d487613c726503b742297fa'}])  2026-03-28 05:33:54.890315 | orchestrator | 2026-03-28 05:33:54.890345 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-28 05:33:54.890359 | orchestrator | Saturday 28 March 2026 05:33:32 +0000 (0:00:09.490) 0:19:39.263 ******** 2026-03-28 05:33:54.890371 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:33:54.890385 | orchestrator | 
2026-03-28 05:33:54.890398 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:33:54.890411 | orchestrator | Saturday 28 March 2026 05:33:35 +0000 (0:00:02.244) 0:19:41.508 ******** 2026-03-28 05:33:54.890423 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:33:54.890436 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-28 05:33:54.890447 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-28 05:33:54.890458 | orchestrator | 2026-03-28 05:33:54.890469 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:33:54.890480 | orchestrator | Saturday 28 March 2026 05:33:37 +0000 (0:00:02.048) 0:19:43.556 ******** 2026-03-28 05:33:54.890491 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:33:54.890502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:33:54.890513 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:33:54.890524 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890535 | orchestrator | 2026-03-28 05:33:54.890546 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-28 05:33:54.890558 | orchestrator | Saturday 28 March 2026 05:33:38 +0000 (0:00:01.744) 0:19:45.300 ******** 2026-03-28 05:33:54.890568 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890579 | orchestrator | 2026-03-28 05:33:54.890591 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-28 05:33:54.890602 | orchestrator | Saturday 28 March 2026 05:33:39 +0000 (0:00:00.790) 0:19:46.091 ******** 2026-03-28 05:33:54.890613 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.890624 | orchestrator | 2026-03-28 05:33:54.890634 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 05:33:54.890645 | orchestrator | Saturday 28 March 2026 05:33:41 +0000 (0:00:01.978) 0:19:48.070 ******** 2026-03-28 05:33:54.890656 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890667 | orchestrator | 2026-03-28 05:33:54.890678 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 05:33:54.890689 | orchestrator | Saturday 28 March 2026 05:33:42 +0000 (0:00:00.806) 0:19:48.876 ******** 2026-03-28 05:33:54.890709 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890720 | orchestrator | 2026-03-28 05:33:54.890765 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 05:33:54.890777 | orchestrator | Saturday 28 March 2026 05:33:43 +0000 (0:00:00.782) 0:19:49.659 ******** 2026-03-28 05:33:54.890788 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890799 | orchestrator | 2026-03-28 05:33:54.890810 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 05:33:54.890821 | orchestrator | Saturday 28 March 2026 05:33:44 +0000 (0:00:00.839) 0:19:50.498 ******** 2026-03-28 05:33:54.890832 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890843 | orchestrator | 2026-03-28 05:33:54.890853 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 05:33:54.890864 | orchestrator | Saturday 28 March 2026 05:33:44 +0000 (0:00:00.814) 0:19:51.313 ******** 2026-03-28 05:33:54.890875 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890886 | 
orchestrator | 2026-03-28 05:33:54.890897 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 05:33:54.890908 | orchestrator | Saturday 28 March 2026 05:33:45 +0000 (0:00:00.810) 0:19:52.124 ******** 2026-03-28 05:33:54.890919 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890929 | orchestrator | 2026-03-28 05:33:54.890940 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 05:33:54.890951 | orchestrator | Saturday 28 March 2026 05:33:46 +0000 (0:00:00.802) 0:19:52.927 ******** 2026-03-28 05:33:54.890962 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:33:54.890973 | orchestrator | 2026-03-28 05:33:54.890984 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-03-28 05:33:54.890995 | orchestrator | 2026-03-28 05:33:54.891006 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-03-28 05:33:54.891017 | orchestrator | Saturday 28 March 2026 05:33:48 +0000 (0:00:01.871) 0:19:54.799 ******** 2026-03-28 05:33:54.891028 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:33:54.891038 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:33:54.891049 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:33:54.891060 | orchestrator | 2026-03-28 05:33:54.891071 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-28 05:33:54.891082 | orchestrator | 2026-03-28 05:33:54.891093 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-28 05:33:54.891104 | orchestrator | Saturday 28 March 2026 05:33:49 +0000 (0:00:01.632) 0:19:56.432 ******** 2026-03-28 05:33:54.891115 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:33:54.891126 | orchestrator | 2026-03-28 05:33:54.891137 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-03-28 05:33:54.891148 | orchestrator | Saturday 28 March 2026 05:33:51 +0000 (0:00:01.257) 0:19:57.690 ******** 2026-03-28 05:33:54.891159 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:33:54.891170 | orchestrator | 2026-03-28 05:33:54.891181 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:33:54.891192 | orchestrator | Saturday 28 March 2026 05:33:52 +0000 (0:00:01.163) 0:19:58.853 ******** 2026-03-28 05:33:54.891209 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:33:54.891220 | orchestrator | 2026-03-28 05:33:54.891231 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:33:54.891242 | orchestrator | Saturday 28 March 2026 05:33:53 +0000 (0:00:01.215) 0:20:00.069 ******** 2026-03-28 05:33:54.891253 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:33:54.891264 | orchestrator | 2026-03-28 05:33:54.891281 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 05:34:41.834184 | orchestrator | Saturday 28 March 2026 05:33:54 +0000 (0:00:01.234) 0:20:01.304 ******** 2026-03-28 05:34:41.834305 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834325 | orchestrator | 2026-03-28 05:34:41.834338 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:34:41.834375 | orchestrator | Saturday 28 March 2026 05:33:56 +0000 (0:00:01.167) 0:20:02.471 ******** 2026-03-28 05:34:41.834388 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834399 | orchestrator | 2026-03-28 05:34:41.834410 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:34:41.834421 | orchestrator | Saturday 28 March 2026 05:33:57 +0000 (0:00:01.185) 0:20:03.657 ******** 2026-03-28 05:34:41.834432 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 05:34:41.834443 | orchestrator | 2026-03-28 05:34:41.834454 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:34:41.834465 | orchestrator | Saturday 28 March 2026 05:33:58 +0000 (0:00:01.148) 0:20:04.805 ******** 2026-03-28 05:34:41.834476 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834486 | orchestrator | 2026-03-28 05:34:41.834497 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:34:41.834508 | orchestrator | Saturday 28 March 2026 05:33:59 +0000 (0:00:01.204) 0:20:06.010 ******** 2026-03-28 05:34:41.834519 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834530 | orchestrator | 2026-03-28 05:34:41.834541 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:34:41.834552 | orchestrator | Saturday 28 March 2026 05:34:00 +0000 (0:00:01.182) 0:20:07.192 ******** 2026-03-28 05:34:41.834563 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834573 | orchestrator | 2026-03-28 05:34:41.834584 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:34:41.834595 | orchestrator | Saturday 28 March 2026 05:34:01 +0000 (0:00:01.167) 0:20:08.359 ******** 2026-03-28 05:34:41.834606 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834618 | orchestrator | 2026-03-28 05:34:41.834637 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:34:41.834655 | orchestrator | Saturday 28 March 2026 05:34:03 +0000 (0:00:01.132) 0:20:09.492 ******** 2026-03-28 05:34:41.834673 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834692 | orchestrator | 2026-03-28 05:34:41.834741 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 05:34:41.834759 | 
orchestrator | Saturday 28 March 2026 05:34:04 +0000 (0:00:01.117) 0:20:10.610 ******** 2026-03-28 05:34:41.834777 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834795 | orchestrator | 2026-03-28 05:34:41.834812 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 05:34:41.834830 | orchestrator | Saturday 28 March 2026 05:34:05 +0000 (0:00:01.152) 0:20:11.762 ******** 2026-03-28 05:34:41.834849 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834867 | orchestrator | 2026-03-28 05:34:41.834886 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:34:41.834906 | orchestrator | Saturday 28 March 2026 05:34:06 +0000 (0:00:01.171) 0:20:12.933 ******** 2026-03-28 05:34:41.834926 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.834945 | orchestrator | 2026-03-28 05:34:41.834965 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:34:41.834984 | orchestrator | Saturday 28 March 2026 05:34:07 +0000 (0:00:01.251) 0:20:14.184 ******** 2026-03-28 05:34:41.835002 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835022 | orchestrator | 2026-03-28 05:34:41.835041 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:34:41.835059 | orchestrator | Saturday 28 March 2026 05:34:08 +0000 (0:00:01.194) 0:20:15.379 ******** 2026-03-28 05:34:41.835078 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835097 | orchestrator | 2026-03-28 05:34:41.835115 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:34:41.835133 | orchestrator | Saturday 28 March 2026 05:34:10 +0000 (0:00:01.144) 0:20:16.524 ******** 2026-03-28 05:34:41.835151 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835169 | orchestrator | 2026-03-28 
05:34:41.835187 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:34:41.835223 | orchestrator | Saturday 28 March 2026 05:34:11 +0000 (0:00:01.140) 0:20:17.664 ******** 2026-03-28 05:34:41.835242 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835260 | orchestrator | 2026-03-28 05:34:41.835278 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 05:34:41.835298 | orchestrator | Saturday 28 March 2026 05:34:12 +0000 (0:00:01.195) 0:20:18.860 ******** 2026-03-28 05:34:41.835316 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835361 | orchestrator | 2026-03-28 05:34:41.835383 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:34:41.835401 | orchestrator | Saturday 28 March 2026 05:34:13 +0000 (0:00:01.289) 0:20:20.149 ******** 2026-03-28 05:34:41.835420 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835439 | orchestrator | 2026-03-28 05:34:41.835458 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:34:41.835476 | orchestrator | Saturday 28 March 2026 05:34:14 +0000 (0:00:01.168) 0:20:21.318 ******** 2026-03-28 05:34:41.835495 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835514 | orchestrator | 2026-03-28 05:34:41.835534 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:34:41.835554 | orchestrator | Saturday 28 March 2026 05:34:16 +0000 (0:00:01.176) 0:20:22.495 ******** 2026-03-28 05:34:41.835573 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835591 | orchestrator | 2026-03-28 05:34:41.835631 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 05:34:41.835650 | orchestrator | Saturday 28 March 2026 05:34:17 +0000 
(0:00:01.144) 0:20:23.640 ******** 2026-03-28 05:34:41.835670 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835689 | orchestrator | 2026-03-28 05:34:41.835755 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:34:41.835801 | orchestrator | Saturday 28 March 2026 05:34:18 +0000 (0:00:01.207) 0:20:24.847 ******** 2026-03-28 05:34:41.835822 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835839 | orchestrator | 2026-03-28 05:34:41.835859 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:34:41.835878 | orchestrator | Saturday 28 March 2026 05:34:19 +0000 (0:00:01.179) 0:20:26.027 ******** 2026-03-28 05:34:41.835898 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835910 | orchestrator | 2026-03-28 05:34:41.835921 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:34:41.835932 | orchestrator | Saturday 28 March 2026 05:34:20 +0000 (0:00:01.196) 0:20:27.224 ******** 2026-03-28 05:34:41.835943 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835953 | orchestrator | 2026-03-28 05:34:41.835964 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:34:41.835975 | orchestrator | Saturday 28 March 2026 05:34:21 +0000 (0:00:01.146) 0:20:28.370 ******** 2026-03-28 05:34:41.835986 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.835997 | orchestrator | 2026-03-28 05:34:41.836007 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:34:41.836018 | orchestrator | Saturday 28 March 2026 05:34:23 +0000 (0:00:01.169) 0:20:29.539 ******** 2026-03-28 05:34:41.836029 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836039 | orchestrator | 2026-03-28 05:34:41.836050 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:34:41.836061 | orchestrator | Saturday 28 March 2026 05:34:24 +0000 (0:00:01.171) 0:20:30.711 ******** 2026-03-28 05:34:41.836071 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836082 | orchestrator | 2026-03-28 05:34:41.836093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:34:41.836104 | orchestrator | Saturday 28 March 2026 05:34:25 +0000 (0:00:01.196) 0:20:31.907 ******** 2026-03-28 05:34:41.836115 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836125 | orchestrator | 2026-03-28 05:34:41.836136 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:34:41.836158 | orchestrator | Saturday 28 March 2026 05:34:26 +0000 (0:00:01.149) 0:20:33.057 ******** 2026-03-28 05:34:41.836169 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836180 | orchestrator | 2026-03-28 05:34:41.836191 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:34:41.836201 | orchestrator | Saturday 28 March 2026 05:34:27 +0000 (0:00:01.339) 0:20:34.397 ******** 2026-03-28 05:34:41.836212 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836223 | orchestrator | 2026-03-28 05:34:41.836234 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:34:41.836244 | orchestrator | Saturday 28 March 2026 05:34:29 +0000 (0:00:01.177) 0:20:35.575 ******** 2026-03-28 05:34:41.836255 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836266 | orchestrator | 2026-03-28 05:34:41.836277 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:34:41.836288 | orchestrator | Saturday 28 March 2026 05:34:30 +0000 (0:00:01.190) 0:20:36.766 ******** 2026-03-28 
05:34:41.836298 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836309 | orchestrator | 2026-03-28 05:34:41.836320 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:34:41.836331 | orchestrator | Saturday 28 March 2026 05:34:31 +0000 (0:00:01.126) 0:20:37.893 ******** 2026-03-28 05:34:41.836341 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836352 | orchestrator | 2026-03-28 05:34:41.836363 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:34:41.836374 | orchestrator | Saturday 28 March 2026 05:34:32 +0000 (0:00:01.125) 0:20:39.019 ******** 2026-03-28 05:34:41.836384 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836395 | orchestrator | 2026-03-28 05:34:41.836406 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 05:34:41.836417 | orchestrator | Saturday 28 March 2026 05:34:33 +0000 (0:00:01.133) 0:20:40.153 ******** 2026-03-28 05:34:41.836427 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836438 | orchestrator | 2026-03-28 05:34:41.836449 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:34:41.836459 | orchestrator | Saturday 28 March 2026 05:34:34 +0000 (0:00:01.157) 0:20:41.310 ******** 2026-03-28 05:34:41.836470 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836481 | orchestrator | 2026-03-28 05:34:41.836492 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:34:41.836504 | orchestrator | Saturday 28 March 2026 05:34:36 +0000 (0:00:01.152) 0:20:42.462 ******** 2026-03-28 05:34:41.836515 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836526 | orchestrator | 2026-03-28 05:34:41.836537 | orchestrator | TASK [ceph-config : Set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:34:41.836548 | orchestrator | Saturday 28 March 2026 05:34:37 +0000 (0:00:01.168) 0:20:43.630 ******** 2026-03-28 05:34:41.836559 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836569 | orchestrator | 2026-03-28 05:34:41.836583 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:34:41.836602 | orchestrator | Saturday 28 March 2026 05:34:38 +0000 (0:00:01.164) 0:20:44.795 ******** 2026-03-28 05:34:41.836620 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836639 | orchestrator | 2026-03-28 05:34:41.836659 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:34:41.836678 | orchestrator | Saturday 28 March 2026 05:34:39 +0000 (0:00:01.143) 0:20:45.939 ******** 2026-03-28 05:34:41.836722 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836740 | orchestrator | 2026-03-28 05:34:41.836767 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:34:41.836779 | orchestrator | Saturday 28 March 2026 05:34:40 +0000 (0:00:01.185) 0:20:47.125 ******** 2026-03-28 05:34:41.836799 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:34:41.836810 | orchestrator | 2026-03-28 05:34:41.836821 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:34:41.836842 | orchestrator | Saturday 28 March 2026 05:34:41 +0000 (0:00:01.126) 0:20:48.252 ******** 2026-03-28 05:35:21.642129 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642273 | orchestrator | 2026-03-28 05:35:21.642297 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:35:21.642318 | orchestrator | Saturday 28 March 2026 05:34:43 +0000 (0:00:01.325) 
0:20:49.578 ******** 2026-03-28 05:35:21.642334 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642350 | orchestrator | 2026-03-28 05:35:21.642367 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:35:21.642382 | orchestrator | Saturday 28 March 2026 05:34:44 +0000 (0:00:01.304) 0:20:50.882 ******** 2026-03-28 05:35:21.642399 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642416 | orchestrator | 2026-03-28 05:35:21.642433 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:35:21.642450 | orchestrator | Saturday 28 March 2026 05:34:45 +0000 (0:00:01.212) 0:20:52.095 ******** 2026-03-28 05:35:21.642467 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642482 | orchestrator | 2026-03-28 05:35:21.642499 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:35:21.642516 | orchestrator | Saturday 28 March 2026 05:34:46 +0000 (0:00:01.295) 0:20:53.390 ******** 2026-03-28 05:35:21.642532 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642549 | orchestrator | 2026-03-28 05:35:21.642566 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:35:21.642582 | orchestrator | Saturday 28 March 2026 05:34:48 +0000 (0:00:01.226) 0:20:54.616 ******** 2026-03-28 05:35:21.642598 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642617 | orchestrator | 2026-03-28 05:35:21.642636 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:35:21.642654 | orchestrator | Saturday 28 March 2026 05:34:49 +0000 (0:00:01.155) 0:20:55.772 ******** 2026-03-28 05:35:21.642714 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642731 | orchestrator | 2026-03-28 
05:35:21.642747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:35:21.642764 | orchestrator | Saturday 28 March 2026 05:34:50 +0000 (0:00:01.174) 0:20:56.947 ******** 2026-03-28 05:35:21.642780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642796 | orchestrator | 2026-03-28 05:35:21.642813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:35:21.642833 | orchestrator | Saturday 28 March 2026 05:34:51 +0000 (0:00:01.188) 0:20:58.135 ******** 2026-03-28 05:35:21.642850 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642867 | orchestrator | 2026-03-28 05:35:21.642886 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:35:21.642903 | orchestrator | Saturday 28 March 2026 05:34:52 +0000 (0:00:01.145) 0:20:59.281 ******** 2026-03-28 05:35:21.642921 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.642937 | orchestrator | 2026-03-28 05:35:21.642953 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:35:21.642970 | orchestrator | Saturday 28 March 2026 05:34:53 +0000 (0:00:01.117) 0:21:00.399 ******** 2026-03-28 05:35:21.642986 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-28 05:35:21.643004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-28 05:35:21.643021 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-28 05:35:21.643036 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643053 | orchestrator | 2026-03-28 05:35:21.643068 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:35:21.643085 | orchestrator | Saturday 28 March 2026 05:34:55 +0000 (0:00:01.515) 0:21:01.915 ******** 2026-03-28 05:35:21.643136 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-28 05:35:21.643153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-28 05:35:21.643170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-28 05:35:21.643187 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643203 | orchestrator | 2026-03-28 05:35:21.643219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:35:21.643235 | orchestrator | Saturday 28 March 2026 05:34:57 +0000 (0:00:01.940) 0:21:03.856 ******** 2026-03-28 05:35:21.643250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-28 05:35:21.643266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-28 05:35:21.643281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-28 05:35:21.643297 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643313 | orchestrator | 2026-03-28 05:35:21.643329 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:35:21.643345 | orchestrator | Saturday 28 March 2026 05:34:59 +0000 (0:00:02.394) 0:21:06.251 ******** 2026-03-28 05:35:21.643360 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643375 | orchestrator | 2026-03-28 05:35:21.643391 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 05:35:21.643407 | orchestrator | Saturday 28 March 2026 05:35:01 +0000 (0:00:01.325) 0:21:07.576 ******** 2026-03-28 05:35:21.643423 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-28 05:35:21.643438 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643455 | orchestrator | 2026-03-28 05:35:21.643471 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 05:35:21.643487 | orchestrator | Saturday 28 March 2026 
05:35:02 +0000 (0:00:01.386) 0:21:08.962 ******** 2026-03-28 05:35:21.643502 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643518 | orchestrator | 2026-03-28 05:35:21.643551 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 05:35:21.643567 | orchestrator | Saturday 28 March 2026 05:35:03 +0000 (0:00:01.181) 0:21:10.144 ******** 2026-03-28 05:35:21.643583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 05:35:21.643600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 05:35:21.643616 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 05:35:21.643658 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643704 | orchestrator | 2026-03-28 05:35:21.643720 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 05:35:21.643737 | orchestrator | Saturday 28 March 2026 05:35:05 +0000 (0:00:01.505) 0:21:11.649 ******** 2026-03-28 05:35:21.643753 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643769 | orchestrator | 2026-03-28 05:35:21.643785 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-28 05:35:21.643801 | orchestrator | Saturday 28 March 2026 05:35:06 +0000 (0:00:01.099) 0:21:12.749 ******** 2026-03-28 05:35:21.643924 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.643945 | orchestrator | 2026-03-28 05:35:21.643962 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 05:35:21.643977 | orchestrator | Saturday 28 March 2026 05:35:07 +0000 (0:00:01.106) 0:21:13.855 ******** 2026-03-28 05:35:21.643993 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.644009 | orchestrator | 2026-03-28 05:35:21.644026 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-03-28 05:35:21.644042 | orchestrator | Saturday 28 March 2026 05:35:08 +0000 (0:00:01.105) 0:21:14.961 ******** 2026-03-28 05:35:21.644057 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:35:21.644073 | orchestrator | 2026-03-28 05:35:21.644089 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-28 05:35:21.644106 | orchestrator | 2026-03-28 05:35:21.644122 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-28 05:35:21.644156 | orchestrator | Saturday 28 March 2026 05:35:09 +0000 (0:00:01.000) 0:21:15.962 ******** 2026-03-28 05:35:21.644173 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644190 | orchestrator | 2026-03-28 05:35:21.644205 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:35:21.644220 | orchestrator | Saturday 28 March 2026 05:35:10 +0000 (0:00:00.746) 0:21:16.708 ******** 2026-03-28 05:35:21.644236 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644252 | orchestrator | 2026-03-28 05:35:21.644268 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:35:21.644285 | orchestrator | Saturday 28 March 2026 05:35:11 +0000 (0:00:00.791) 0:21:17.500 ******** 2026-03-28 05:35:21.644301 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644318 | orchestrator | 2026-03-28 05:35:21.644334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:35:21.644350 | orchestrator | Saturday 28 March 2026 05:35:11 +0000 (0:00:00.745) 0:21:18.245 ******** 2026-03-28 05:35:21.644366 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644381 | orchestrator | 2026-03-28 05:35:21.644397 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 
2026-03-28 05:35:21.644413 | orchestrator | Saturday 28 March 2026 05:35:12 +0000 (0:00:00.783) 0:21:19.029 ******** 2026-03-28 05:35:21.644429 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644445 | orchestrator | 2026-03-28 05:35:21.644461 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:35:21.644476 | orchestrator | Saturday 28 March 2026 05:35:13 +0000 (0:00:00.791) 0:21:19.820 ******** 2026-03-28 05:35:21.644493 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644508 | orchestrator | 2026-03-28 05:35:21.644524 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:35:21.644540 | orchestrator | Saturday 28 March 2026 05:35:14 +0000 (0:00:00.806) 0:21:20.626 ******** 2026-03-28 05:35:21.644556 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644572 | orchestrator | 2026-03-28 05:35:21.644588 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:35:21.644605 | orchestrator | Saturday 28 March 2026 05:35:14 +0000 (0:00:00.798) 0:21:21.425 ******** 2026-03-28 05:35:21.644620 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644635 | orchestrator | 2026-03-28 05:35:21.644652 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:35:21.644728 | orchestrator | Saturday 28 March 2026 05:35:15 +0000 (0:00:00.806) 0:21:22.232 ******** 2026-03-28 05:35:21.644748 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644764 | orchestrator | 2026-03-28 05:35:21.644780 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:35:21.644796 | orchestrator | Saturday 28 March 2026 05:35:16 +0000 (0:00:00.797) 0:21:23.029 ******** 2026-03-28 05:35:21.644814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644830 
| orchestrator | 2026-03-28 05:35:21.644846 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:35:21.644863 | orchestrator | Saturday 28 March 2026 05:35:17 +0000 (0:00:00.829) 0:21:23.859 ******** 2026-03-28 05:35:21.644879 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.644962 | orchestrator | 2026-03-28 05:35:21.644977 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:35:21.644991 | orchestrator | Saturday 28 March 2026 05:35:18 +0000 (0:00:00.797) 0:21:24.656 ******** 2026-03-28 05:35:21.645004 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.645018 | orchestrator | 2026-03-28 05:35:21.645032 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 05:35:21.645045 | orchestrator | Saturday 28 March 2026 05:35:19 +0000 (0:00:00.806) 0:21:25.463 ******** 2026-03-28 05:35:21.645059 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.645071 | orchestrator | 2026-03-28 05:35:21.645083 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 05:35:21.645110 | orchestrator | Saturday 28 March 2026 05:35:19 +0000 (0:00:00.949) 0:21:26.412 ******** 2026-03-28 05:35:21.645122 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.645134 | orchestrator | 2026-03-28 05:35:21.645157 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:35:21.645170 | orchestrator | Saturday 28 March 2026 05:35:20 +0000 (0:00:00.861) 0:21:27.274 ******** 2026-03-28 05:35:21.645184 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:21.645197 | orchestrator | 2026-03-28 05:35:21.645211 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:35:21.645243 | orchestrator | Saturday 28 March 2026 
05:35:21 +0000 (0:00:00.789) 0:21:28.063 ******** 2026-03-28 05:35:54.422722 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.422867 | orchestrator | 2026-03-28 05:35:54.422897 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:35:54.422922 | orchestrator | Saturday 28 March 2026 05:35:22 +0000 (0:00:00.779) 0:21:28.842 ******** 2026-03-28 05:35:54.422943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.422964 | orchestrator | 2026-03-28 05:35:54.422985 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:35:54.423005 | orchestrator | Saturday 28 March 2026 05:35:23 +0000 (0:00:00.783) 0:21:29.626 ******** 2026-03-28 05:35:54.423027 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423048 | orchestrator | 2026-03-28 05:35:54.423069 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:35:54.423091 | orchestrator | Saturday 28 March 2026 05:35:23 +0000 (0:00:00.790) 0:21:30.417 ******** 2026-03-28 05:35:54.423112 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423134 | orchestrator | 2026-03-28 05:35:54.423155 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 05:35:54.423178 | orchestrator | Saturday 28 March 2026 05:35:24 +0000 (0:00:00.789) 0:21:31.206 ******** 2026-03-28 05:35:54.423199 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423220 | orchestrator | 2026-03-28 05:35:54.423241 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:35:54.423263 | orchestrator | Saturday 28 March 2026 05:35:25 +0000 (0:00:00.810) 0:21:32.017 ******** 2026-03-28 05:35:54.423284 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423305 | orchestrator | 2026-03-28 05:35:54.423327 | 
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:35:54.423348 | orchestrator | Saturday 28 March 2026 05:35:26 +0000 (0:00:00.777) 0:21:32.795 ******** 2026-03-28 05:35:54.423370 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423391 | orchestrator | 2026-03-28 05:35:54.423412 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:35:54.423434 | orchestrator | Saturday 28 March 2026 05:35:27 +0000 (0:00:00.803) 0:21:33.599 ******** 2026-03-28 05:35:54.423455 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423476 | orchestrator | 2026-03-28 05:35:54.423497 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 05:35:54.423519 | orchestrator | Saturday 28 March 2026 05:35:27 +0000 (0:00:00.788) 0:21:34.387 ******** 2026-03-28 05:35:54.423540 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423561 | orchestrator | 2026-03-28 05:35:54.423582 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:35:54.423603 | orchestrator | Saturday 28 March 2026 05:35:28 +0000 (0:00:00.791) 0:21:35.179 ******** 2026-03-28 05:35:54.423625 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423675 | orchestrator | 2026-03-28 05:35:54.423697 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:35:54.423718 | orchestrator | Saturday 28 March 2026 05:35:29 +0000 (0:00:00.822) 0:21:36.002 ******** 2026-03-28 05:35:54.423738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423758 | orchestrator | 2026-03-28 05:35:54.423810 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:35:54.423831 | orchestrator | Saturday 28 March 2026 05:35:30 +0000 (0:00:00.966) 0:21:36.968 ******** 
2026-03-28 05:35:54.423851 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423871 | orchestrator | 2026-03-28 05:35:54.423891 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:35:54.423911 | orchestrator | Saturday 28 March 2026 05:35:31 +0000 (0:00:00.765) 0:21:37.733 ******** 2026-03-28 05:35:54.423931 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.423951 | orchestrator | 2026-03-28 05:35:54.423972 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:35:54.423990 | orchestrator | Saturday 28 March 2026 05:35:32 +0000 (0:00:00.778) 0:21:38.512 ******** 2026-03-28 05:35:54.424009 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424027 | orchestrator | 2026-03-28 05:35:54.424047 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:35:54.424066 | orchestrator | Saturday 28 March 2026 05:35:32 +0000 (0:00:00.762) 0:21:39.275 ******** 2026-03-28 05:35:54.424085 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424103 | orchestrator | 2026-03-28 05:35:54.424122 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:35:54.424141 | orchestrator | Saturday 28 March 2026 05:35:33 +0000 (0:00:00.810) 0:21:40.085 ******** 2026-03-28 05:35:54.424160 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424178 | orchestrator | 2026-03-28 05:35:54.424197 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:35:54.424215 | orchestrator | Saturday 28 March 2026 05:35:34 +0000 (0:00:00.782) 0:21:40.868 ******** 2026-03-28 05:35:54.424235 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424253 | orchestrator | 2026-03-28 05:35:54.424272 | orchestrator | TASK [ceph-config : Include 
create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:35:54.424290 | orchestrator | Saturday 28 March 2026 05:35:35 +0000 (0:00:00.781) 0:21:41.650 ******** 2026-03-28 05:35:54.424308 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424326 | orchestrator | 2026-03-28 05:35:54.424343 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:35:54.424361 | orchestrator | Saturday 28 March 2026 05:35:35 +0000 (0:00:00.764) 0:21:42.415 ******** 2026-03-28 05:35:54.424379 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424396 | orchestrator | 2026-03-28 05:35:54.424414 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:35:54.424451 | orchestrator | Saturday 28 March 2026 05:35:36 +0000 (0:00:00.776) 0:21:43.192 ******** 2026-03-28 05:35:54.424470 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424488 | orchestrator | 2026-03-28 05:35:54.424506 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:35:54.424524 | orchestrator | Saturday 28 March 2026 05:35:37 +0000 (0:00:00.862) 0:21:44.054 ******** 2026-03-28 05:35:54.424542 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424561 | orchestrator | 2026-03-28 05:35:54.424606 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:35:54.424625 | orchestrator | Saturday 28 March 2026 05:35:38 +0000 (0:00:00.779) 0:21:44.834 ******** 2026-03-28 05:35:54.424667 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424688 | orchestrator | 2026-03-28 05:35:54.424706 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 05:35:54.424725 | orchestrator | Saturday 28 March 2026 05:35:39 +0000 (0:00:00.769) 0:21:45.604 ******** 2026-03-28 05:35:54.424743 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:35:54.424762 | orchestrator | 2026-03-28 05:35:54.424780 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:35:54.424798 | orchestrator | Saturday 28 March 2026 05:35:40 +0000 (0:00:00.934) 0:21:46.539 ******** 2026-03-28 05:35:54.424816 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424849 | orchestrator | 2026-03-28 05:35:54.424867 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:35:54.424886 | orchestrator | Saturday 28 March 2026 05:35:40 +0000 (0:00:00.809) 0:21:47.349 ******** 2026-03-28 05:35:54.424902 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424918 | orchestrator | 2026-03-28 05:35:54.424934 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:35:54.424953 | orchestrator | Saturday 28 March 2026 05:35:41 +0000 (0:00:00.779) 0:21:48.128 ******** 2026-03-28 05:35:54.424971 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.424989 | orchestrator | 2026-03-28 05:35:54.425008 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:35:54.425026 | orchestrator | Saturday 28 March 2026 05:35:42 +0000 (0:00:00.785) 0:21:48.913 ******** 2026-03-28 05:35:54.425045 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425064 | orchestrator | 2026-03-28 05:35:54.425082 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:35:54.425100 | orchestrator | Saturday 28 March 2026 05:35:43 +0000 (0:00:00.811) 0:21:49.725 ******** 2026-03-28 05:35:54.425119 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425138 | orchestrator | 2026-03-28 05:35:54.425154 | orchestrator | TASK 
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:35:54.425226 | orchestrator | Saturday 28 March 2026 05:35:44 +0000 (0:00:00.837) 0:21:50.562 ******** 2026-03-28 05:35:54.425245 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425262 | orchestrator | 2026-03-28 05:35:54.425280 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:35:54.425300 | orchestrator | Saturday 28 March 2026 05:35:44 +0000 (0:00:00.770) 0:21:51.333 ******** 2026-03-28 05:35:54.425318 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425336 | orchestrator | 2026-03-28 05:35:54.425353 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:35:54.425365 | orchestrator | Saturday 28 March 2026 05:35:45 +0000 (0:00:00.792) 0:21:52.126 ******** 2026-03-28 05:35:54.425384 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425403 | orchestrator | 2026-03-28 05:35:54.425421 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:35:54.425439 | orchestrator | Saturday 28 March 2026 05:35:46 +0000 (0:00:00.872) 0:21:52.998 ******** 2026-03-28 05:35:54.425458 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425477 | orchestrator | 2026-03-28 05:35:54.425495 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:35:54.425515 | orchestrator | Saturday 28 March 2026 05:35:47 +0000 (0:00:00.785) 0:21:53.784 ******** 2026-03-28 05:35:54.425534 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425553 | orchestrator | 2026-03-28 05:35:54.425573 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:35:54.425590 | orchestrator | Saturday 28 March 2026 05:35:48 +0000 (0:00:00.885) 0:21:54.669 ******** 2026-03-28 
05:35:54.425609 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425627 | orchestrator | 2026-03-28 05:35:54.425713 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:35:54.425737 | orchestrator | Saturday 28 March 2026 05:35:49 +0000 (0:00:00.776) 0:21:55.446 ******** 2026-03-28 05:35:54.425757 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425775 | orchestrator | 2026-03-28 05:35:54.425792 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:35:54.425805 | orchestrator | Saturday 28 March 2026 05:35:49 +0000 (0:00:00.767) 0:21:56.213 ******** 2026-03-28 05:35:54.425814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425824 | orchestrator | 2026-03-28 05:35:54.425834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:35:54.425857 | orchestrator | Saturday 28 March 2026 05:35:50 +0000 (0:00:00.988) 0:21:57.202 ******** 2026-03-28 05:35:54.425866 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425876 | orchestrator | 2026-03-28 05:35:54.425886 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:35:54.425895 | orchestrator | Saturday 28 March 2026 05:35:51 +0000 (0:00:00.767) 0:21:57.970 ******** 2026-03-28 05:35:54.425905 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425914 | orchestrator | 2026-03-28 05:35:54.425924 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:35:54.425933 | orchestrator | Saturday 28 March 2026 05:35:52 +0000 (0:00:00.871) 0:21:58.841 ******** 2026-03-28 05:35:54.425943 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:35:54.425953 | orchestrator | 2026-03-28 05:35:54.425971 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:35:54.425981 | orchestrator | Saturday 28 March 2026 05:35:53 +0000 (0:00:00.897) 0:21:59.739 ******** 2026-03-28 05:35:54.425991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:35:54.426001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:35:54.426064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:36:26.705083 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:36:26.705202 | orchestrator | 2026-03-28 05:36:26.705220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:36:26.705233 | orchestrator | Saturday 28 March 2026 05:35:54 +0000 (0:00:01.103) 0:22:00.843 ******** 2026-03-28 05:36:26.705245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:36:26.705256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:36:26.705268 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:36:26.705279 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:36:26.705289 | orchestrator | 2026-03-28 05:36:26.705301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:36:26.705312 | orchestrator | Saturday 28 March 2026 05:35:55 +0000 (0:00:01.064) 0:22:01.907 ******** 2026-03-28 05:36:26.705324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:36:26.705335 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:36:26.705346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:36:26.705357 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:36:26.705368 | orchestrator | 2026-03-28 05:36:26.705379 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] 
***************************
2026-03-28 05:36:26.705390 | orchestrator | Saturday 28 March 2026 05:35:56 +0000 (0:00:01.078) 0:22:02.986 ********
2026-03-28 05:36:26.705401 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705412 | orchestrator |
2026-03-28 05:36:26.705423 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:36:26.705434 | orchestrator | Saturday 28 March 2026 05:35:57 +0000 (0:00:00.836) 0:22:03.822 ********
2026-03-28 05:36:26.705445 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-28 05:36:26.705456 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705467 | orchestrator |
2026-03-28 05:36:26.705478 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:36:26.705489 | orchestrator | Saturday 28 March 2026 05:35:58 +0000 (0:00:00.936) 0:22:04.759 ********
2026-03-28 05:36:26.705500 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705511 | orchestrator |
2026-03-28 05:36:26.705522 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-28 05:36:26.705533 | orchestrator | Saturday 28 March 2026 05:35:59 +0000 (0:00:00.849) 0:22:05.608 ********
2026-03-28 05:36:26.705543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 05:36:26.705554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:36:26.705565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 05:36:26.705601 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705613 | orchestrator |
2026-03-28 05:36:26.705653 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-28 05:36:26.705675 | orchestrator | Saturday 28 March 2026 05:36:00 +0000 (0:00:01.682) 0:22:07.291 ********
2026-03-28 05:36:26.705696 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705716 | orchestrator |
2026-03-28 05:36:26.705736 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-28 05:36:26.705749 | orchestrator | Saturday 28 March 2026 05:36:01 +0000 (0:00:00.855) 0:22:08.146 ********
2026-03-28 05:36:26.705762 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705774 | orchestrator |
2026-03-28 05:36:26.705787 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-28 05:36:26.705800 | orchestrator | Saturday 28 March 2026 05:36:02 +0000 (0:00:00.937) 0:22:09.084 ********
2026-03-28 05:36:26.705812 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705824 | orchestrator |
2026-03-28 05:36:26.705836 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-28 05:36:26.705849 | orchestrator | Saturday 28 March 2026 05:36:03 +0000 (0:00:00.782) 0:22:09.867 ********
2026-03-28 05:36:26.705862 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:36:26.705874 | orchestrator |
2026-03-28 05:36:26.705887 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-28 05:36:26.705898 | orchestrator |
2026-03-28 05:36:26.705909 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-28 05:36:26.705920 | orchestrator | Saturday 28 March 2026 05:36:04 +0000 (0:00:01.028) 0:22:10.895 ********
2026-03-28 05:36:26.705931 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.705942 | orchestrator |
2026-03-28 05:36:26.705953 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 05:36:26.705964 | orchestrator | Saturday 28 March 2026 05:36:05 +0000 (0:00:00.841) 0:22:11.737 ********
2026-03-28 05:36:26.705975 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.705986 | orchestrator |
2026-03-28 05:36:26.705997 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 05:36:26.706009 | orchestrator | Saturday 28 March 2026 05:36:06 +0000 (0:00:00.824) 0:22:12.562 ********
2026-03-28 05:36:26.706084 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706108 | orchestrator |
2026-03-28 05:36:26.706137 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 05:36:26.706156 | orchestrator | Saturday 28 March 2026 05:36:06 +0000 (0:00:00.837) 0:22:13.399 ********
2026-03-28 05:36:26.706175 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706194 | orchestrator |
2026-03-28 05:36:26.706213 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 05:36:26.706231 | orchestrator | Saturday 28 March 2026 05:36:07 +0000 (0:00:00.815) 0:22:14.215 ********
2026-03-28 05:36:26.706270 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706291 | orchestrator |
2026-03-28 05:36:26.706310 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 05:36:26.706328 | orchestrator | Saturday 28 March 2026 05:36:08 +0000 (0:00:00.842) 0:22:15.057 ********
2026-03-28 05:36:26.706347 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706366 | orchestrator |
2026-03-28 05:36:26.706385 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 05:36:26.706430 | orchestrator | Saturday 28 March 2026 05:36:09 +0000 (0:00:00.810) 0:22:15.868 ********
2026-03-28 05:36:26.706447 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706459 | orchestrator |
2026-03-28 05:36:26.706470 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 05:36:26.706481 | orchestrator | Saturday 28 March 2026 05:36:10 +0000 (0:00:00.809) 0:22:16.677 ********
2026-03-28 05:36:26.706492 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706516 | orchestrator |
2026-03-28 05:36:26.706527 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 05:36:26.706538 | orchestrator | Saturday 28 March 2026 05:36:11 +0000 (0:00:00.922) 0:22:17.599 ********
2026-03-28 05:36:26.706549 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706560 | orchestrator |
2026-03-28 05:36:26.706571 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 05:36:26.706582 | orchestrator | Saturday 28 March 2026 05:36:11 +0000 (0:00:00.814) 0:22:18.413 ********
2026-03-28 05:36:26.706593 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706604 | orchestrator |
2026-03-28 05:36:26.706615 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 05:36:26.706675 | orchestrator | Saturday 28 March 2026 05:36:12 +0000 (0:00:00.803) 0:22:19.217 ********
2026-03-28 05:36:26.706688 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706699 | orchestrator |
2026-03-28 05:36:26.706710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 05:36:26.706721 | orchestrator | Saturday 28 March 2026 05:36:13 +0000 (0:00:00.794) 0:22:20.012 ********
2026-03-28 05:36:26.706732 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706743 | orchestrator |
2026-03-28 05:36:26.706754 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 05:36:26.706764 | orchestrator | Saturday 28 March 2026 05:36:14 +0000 (0:00:00.772) 0:22:20.784 ********
2026-03-28 05:36:26.706775 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706786 | orchestrator |
2026-03-28 05:36:26.706796 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 05:36:26.706807 | orchestrator | Saturday 28 March 2026 05:36:15 +0000 (0:00:00.849) 0:22:21.634 ********
2026-03-28 05:36:26.706818 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706829 | orchestrator |
2026-03-28 05:36:26.706839 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 05:36:26.706850 | orchestrator | Saturday 28 March 2026 05:36:16 +0000 (0:00:00.873) 0:22:22.508 ********
2026-03-28 05:36:26.706861 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706871 | orchestrator |
2026-03-28 05:36:26.706882 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 05:36:26.706893 | orchestrator | Saturday 28 March 2026 05:36:16 +0000 (0:00:00.783) 0:22:23.291 ********
2026-03-28 05:36:26.706904 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706915 | orchestrator |
2026-03-28 05:36:26.706925 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 05:36:26.706936 | orchestrator | Saturday 28 March 2026 05:36:17 +0000 (0:00:00.862) 0:22:24.154 ********
2026-03-28 05:36:26.706947 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.706958 | orchestrator |
2026-03-28 05:36:26.706969 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 05:36:26.706980 | orchestrator | Saturday 28 March 2026 05:36:18 +0000 (0:00:00.785) 0:22:24.939 ********
2026-03-28 05:36:26.706990 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707001 | orchestrator |
2026-03-28 05:36:26.707012 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 05:36:26.707023 | orchestrator | Saturday 28 March 2026 05:36:19 +0000 (0:00:00.832) 0:22:25.771 ********
2026-03-28 05:36:26.707034 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707044 | orchestrator |
2026-03-28 05:36:26.707056 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 05:36:26.707067 | orchestrator | Saturday 28 March 2026 05:36:20 +0000 (0:00:00.791) 0:22:26.563 ********
2026-03-28 05:36:26.707078 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707089 | orchestrator |
2026-03-28 05:36:26.707100 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 05:36:26.707111 | orchestrator | Saturday 28 March 2026 05:36:21 +0000 (0:00:00.932) 0:22:27.495 ********
2026-03-28 05:36:26.707121 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707139 | orchestrator |
2026-03-28 05:36:26.707150 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 05:36:26.707161 | orchestrator | Saturday 28 March 2026 05:36:21 +0000 (0:00:00.784) 0:22:28.280 ********
2026-03-28 05:36:26.707172 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707182 | orchestrator |
2026-03-28 05:36:26.707193 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 05:36:26.707204 | orchestrator | Saturday 28 March 2026 05:36:22 +0000 (0:00:00.803) 0:22:29.083 ********
2026-03-28 05:36:26.707214 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707225 | orchestrator |
2026-03-28 05:36:26.707236 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 05:36:26.707247 | orchestrator | Saturday 28 March 2026 05:36:23 +0000 (0:00:00.796) 0:22:29.880 ********
2026-03-28 05:36:26.707258 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707268 | orchestrator |
2026-03-28 05:36:26.707282 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 05:36:26.707303 | orchestrator | Saturday 28 March 2026 05:36:24 +0000 (0:00:00.816) 0:22:30.696 ********
2026-03-28 05:36:26.707322 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707342 | orchestrator |
2026-03-28 05:36:26.707369 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 05:36:26.707389 | orchestrator | Saturday 28 March 2026 05:36:25 +0000 (0:00:00.811) 0:22:31.508 ********
2026-03-28 05:36:26.707408 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:26.707427 | orchestrator |
2026-03-28 05:36:26.707448 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 05:36:26.707468 | orchestrator | Saturday 28 March 2026 05:36:25 +0000 (0:00:00.796) 0:22:32.305 ********
2026-03-28 05:36:26.707500 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587003 | orchestrator |
2026-03-28 05:36:58.587122 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 05:36:58.587138 | orchestrator | Saturday 28 March 2026 05:36:26 +0000 (0:00:00.820) 0:22:33.125 ********
2026-03-28 05:36:58.587150 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587162 | orchestrator |
2026-03-28 05:36:58.587173 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 05:36:58.587185 | orchestrator | Saturday 28 March 2026 05:36:27 +0000 (0:00:00.801) 0:22:33.926 ********
2026-03-28 05:36:58.587196 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587207 | orchestrator |
2026-03-28 05:36:58.587217 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 05:36:58.587228 | orchestrator | Saturday 28 March 2026 05:36:28 +0000 (0:00:00.814) 0:22:34.741 ********
2026-03-28 05:36:58.587239 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587250 | orchestrator |
2026-03-28 05:36:58.587261 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 05:36:58.587272 | orchestrator | Saturday 28 March 2026 05:36:29 +0000 (0:00:00.809) 0:22:35.551 ********
2026-03-28 05:36:58.587283 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587294 | orchestrator |
2026-03-28 05:36:58.587305 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 05:36:58.587317 | orchestrator | Saturday 28 March 2026 05:36:29 +0000 (0:00:00.809) 0:22:36.360 ********
2026-03-28 05:36:58.587328 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587339 | orchestrator |
2026-03-28 05:36:58.587350 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 05:36:58.587361 | orchestrator | Saturday 28 March 2026 05:36:30 +0000 (0:00:00.997) 0:22:37.358 ********
2026-03-28 05:36:58.587372 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587382 | orchestrator |
2026-03-28 05:36:58.587393 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 05:36:58.587404 | orchestrator | Saturday 28 March 2026 05:36:31 +0000 (0:00:00.830) 0:22:38.188 ********
2026-03-28 05:36:58.587440 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587452 | orchestrator |
2026-03-28 05:36:58.587463 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 05:36:58.587474 | orchestrator | Saturday 28 March 2026 05:36:32 +0000 (0:00:00.812) 0:22:39.001 ********
2026-03-28 05:36:58.587485 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587495 | orchestrator |
2026-03-28 05:36:58.587506 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 05:36:58.587517 | orchestrator | Saturday 28 March 2026 05:36:33 +0000 (0:00:00.830) 0:22:39.832 ********
2026-03-28 05:36:58.587528 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587539 | orchestrator |
2026-03-28 05:36:58.587551 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 05:36:58.587563 | orchestrator | Saturday 28 March 2026 05:36:34 +0000 (0:00:00.825) 0:22:40.657 ********
2026-03-28 05:36:58.587575 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587587 | orchestrator |
2026-03-28 05:36:58.587600 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 05:36:58.587644 | orchestrator | Saturday 28 March 2026 05:36:35 +0000 (0:00:00.795) 0:22:41.452 ********
2026-03-28 05:36:58.587667 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587680 | orchestrator |
2026-03-28 05:36:58.587691 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 05:36:58.587704 | orchestrator | Saturday 28 March 2026 05:36:35 +0000 (0:00:00.802) 0:22:42.255 ********
2026-03-28 05:36:58.587716 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587728 | orchestrator |
2026-03-28 05:36:58.587741 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 05:36:58.587755 | orchestrator | Saturday 28 March 2026 05:36:36 +0000 (0:00:00.814) 0:22:43.070 ********
2026-03-28 05:36:58.587768 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587780 | orchestrator |
2026-03-28 05:36:58.587792 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 05:36:58.587805 | orchestrator | Saturday 28 March 2026 05:36:37 +0000 (0:00:00.890) 0:22:43.961 ********
2026-03-28 05:36:58.587818 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587830 | orchestrator |
2026-03-28 05:36:58.587842 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 05:36:58.587855 | orchestrator | Saturday 28 March 2026 05:36:38 +0000 (0:00:00.778) 0:22:44.740 ********
2026-03-28 05:36:58.587867 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587880 | orchestrator |
2026-03-28 05:36:58.587892 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 05:36:58.587905 | orchestrator | Saturday 28 March 2026 05:36:39 +0000 (0:00:00.782) 0:22:45.522 ********
2026-03-28 05:36:58.587916 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587927 | orchestrator |
2026-03-28 05:36:58.587937 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 05:36:58.587948 | orchestrator | Saturday 28 March 2026 05:36:39 +0000 (0:00:00.782) 0:22:46.305 ********
2026-03-28 05:36:58.587959 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.587969 | orchestrator |
2026-03-28 05:36:58.587980 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 05:36:58.587991 | orchestrator | Saturday 28 March 2026 05:36:40 +0000 (0:00:00.784) 0:22:47.089 ********
2026-03-28 05:36:58.588002 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588013 | orchestrator |
2026-03-28 05:36:58.588038 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 05:36:58.588049 | orchestrator | Saturday 28 March 2026 05:36:41 +0000 (0:00:00.877) 0:22:47.966 ********
2026-03-28 05:36:58.588060 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588071 | orchestrator |
2026-03-28 05:36:58.588082 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 05:36:58.588102 | orchestrator | Saturday 28 March 2026 05:36:42 +0000 (0:00:00.897) 0:22:48.863 ********
2026-03-28 05:36:58.588129 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588140 | orchestrator |
2026-03-28 05:36:58.588151 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 05:36:58.588162 | orchestrator | Saturday 28 March 2026 05:36:43 +0000 (0:00:00.781) 0:22:49.645 ********
2026-03-28 05:36:58.588173 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588184 | orchestrator |
2026-03-28 05:36:58.588195 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 05:36:58.588205 | orchestrator | Saturday 28 March 2026 05:36:44 +0000 (0:00:00.892) 0:22:50.537 ********
2026-03-28 05:36:58.588216 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588227 | orchestrator |
2026-03-28 05:36:58.588238 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 05:36:58.588248 | orchestrator | Saturday 28 March 2026 05:36:45 +0000 (0:00:00.913) 0:22:51.451 ********
2026-03-28 05:36:58.588259 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588270 | orchestrator |
2026-03-28 05:36:58.588281 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:36:58.588293 | orchestrator | Saturday 28 March 2026 05:36:45 +0000 (0:00:00.916) 0:22:52.368 ********
2026-03-28 05:36:58.588304 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588314 | orchestrator |
2026-03-28 05:36:58.588325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:36:58.588336 | orchestrator | Saturday 28 March 2026 05:36:46 +0000 (0:00:00.799) 0:22:53.167 ********
2026-03-28 05:36:58.588347 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588358 | orchestrator |
2026-03-28 05:36:58.588368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:36:58.588379 | orchestrator | Saturday 28 March 2026 05:36:47 +0000 (0:00:00.836) 0:22:54.004 ********
2026-03-28 05:36:58.588390 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588401 | orchestrator |
2026-03-28 05:36:58.588412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:36:58.588423 | orchestrator | Saturday 28 March 2026 05:36:48 +0000 (0:00:00.805) 0:22:54.810 ********
2026-03-28 05:36:58.588433 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588444 | orchestrator |
2026-03-28 05:36:58.588455 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:36:58.588466 | orchestrator | Saturday 28 March 2026 05:36:49 +0000 (0:00:00.797) 0:22:55.608 ********
2026-03-28 05:36:58.588477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 05:36:58.588488 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 05:36:58.588499 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 05:36:58.588510 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588521 | orchestrator |
2026-03-28 05:36:58.588532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:36:58.588543 | orchestrator | Saturday 28 March 2026 05:36:50 +0000 (0:00:01.091) 0:22:56.699 ********
2026-03-28 05:36:58.588554 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 05:36:58.588564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 05:36:58.588575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 05:36:58.588586 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588597 | orchestrator |
2026-03-28 05:36:58.588635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:36:58.588646 | orchestrator | Saturday 28 March 2026 05:36:51 +0000 (0:00:01.443) 0:22:58.143 ********
2026-03-28 05:36:58.588657 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 05:36:58.588668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 05:36:58.588679 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 05:36:58.588697 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588707 | orchestrator |
2026-03-28 05:36:58.588718 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:36:58.588729 | orchestrator | Saturday 28 March 2026 05:36:53 +0000 (0:00:01.455) 0:22:59.598 ********
2026-03-28 05:36:58.588740 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588751 | orchestrator |
2026-03-28 05:36:58.588761 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:36:58.588772 | orchestrator | Saturday 28 March 2026 05:36:54 +0000 (0:00:00.899) 0:23:00.498 ********
2026-03-28 05:36:58.588784 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-28 05:36:58.588794 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588805 | orchestrator |
2026-03-28 05:36:58.588816 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:36:58.588827 | orchestrator | Saturday 28 March 2026 05:36:54 +0000 (0:00:00.912) 0:23:01.411 ********
2026-03-28 05:36:58.588838 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588848 | orchestrator |
2026-03-28 05:36:58.588859 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-28 05:36:58.588870 | orchestrator | Saturday 28 March 2026 05:36:55 +0000 (0:00:00.873) 0:23:02.284 ********
2026-03-28 05:36:58.588880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 05:36:58.588891 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 05:36:58.588902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 05:36:58.588913 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588923 | orchestrator |
2026-03-28 05:36:58.588940 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-28 05:36:58.588951 | orchestrator | Saturday 28 March 2026 05:36:56 +0000 (0:00:01.109) 0:23:03.394 ********
2026-03-28 05:36:58.588962 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:36:58.588973 | orchestrator |
2026-03-28 05:36:58.588983 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-28 05:36:58.588994 | orchestrator | Saturday 28 March 2026 05:36:57 +0000 (0:00:00.789) 0:23:04.183 ********
2026-03-28 05:36:58.589011 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:37:40.445922 | orchestrator |
2026-03-28 05:37:40.446104 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-28 05:37:40.446122 | orchestrator | Saturday 28 March 2026 05:36:58 +0000 (0:00:00.826) 0:23:05.010 ********
2026-03-28 05:37:40.446134 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:37:40.446144 | orchestrator |
2026-03-28 05:37:40.446154 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-28 05:37:40.446164 | orchestrator | Saturday 28 March 2026 05:36:59 +0000 (0:00:00.823) 0:23:05.833 ********
2026-03-28 05:37:40.446174 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:37:40.446184 | orchestrator |
2026-03-28 05:37:40.446194 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-03-28 05:37:40.446204 | orchestrator |
2026-03-28 05:37:40.446214 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-28 05:37:40.446224 | orchestrator | Saturday 28 March 2026 05:37:00 +0000 (0:00:01.413) 0:23:07.247 ********
2026-03-28 05:37:40.446233 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:37:40.446243 | orchestrator |
2026-03-28 05:37:40.446253 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-03-28 05:37:40.446263 | orchestrator | Saturday 28 March 2026 05:37:13 +0000 (0:00:13.007) 0:23:20.255 ********
2026-03-28 05:37:40.446272 | orchestrator | changed: [testbed-node-0]
2026-03-28 05:37:40.446282 | orchestrator |
2026-03-28 05:37:40.446291 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 05:37:40.446301 | orchestrator | Saturday 28 March 2026 05:37:16 +0000 (0:00:02.856) 0:23:23.112 ********
2026-03-28 05:37:40.446311 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-03-28 05:37:40.446346 | orchestrator |
2026-03-28 05:37:40.446356 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 05:37:40.446366 | orchestrator | Saturday 28 March 2026 05:37:17 +0000 (0:00:01.202) 0:23:24.315 ********
2026-03-28 05:37:40.446376 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446386 | orchestrator |
2026-03-28 05:37:40.446396 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 05:37:40.446405 | orchestrator | Saturday 28 March 2026 05:37:19 +0000 (0:00:01.463) 0:23:25.779 ********
2026-03-28 05:37:40.446416 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446426 | orchestrator |
2026-03-28 05:37:40.446436 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 05:37:40.446445 | orchestrator | Saturday 28 March 2026 05:37:20 +0000 (0:00:01.193) 0:23:26.972 ********
2026-03-28 05:37:40.446455 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446466 | orchestrator |
2026-03-28 05:37:40.446477 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 05:37:40.446489 | orchestrator | Saturday 28 March 2026 05:37:22 +0000 (0:00:01.549) 0:23:28.522 ********
2026-03-28 05:37:40.446501 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446512 | orchestrator |
2026-03-28 05:37:40.446523 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 05:37:40.446534 | orchestrator | Saturday 28 March 2026 05:37:23 +0000 (0:00:01.170) 0:23:29.692 ********
2026-03-28 05:37:40.446545 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446556 | orchestrator |
2026-03-28 05:37:40.446567 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 05:37:40.446579 | orchestrator | Saturday 28 March 2026 05:37:24 +0000 (0:00:01.159) 0:23:30.852 ********
2026-03-28 05:37:40.446615 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446626 | orchestrator |
2026-03-28 05:37:40.446637 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 05:37:40.446649 | orchestrator | Saturday 28 March 2026 05:37:25 +0000 (0:00:01.221) 0:23:32.074 ********
2026-03-28 05:37:40.446660 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:37:40.446671 | orchestrator |
2026-03-28 05:37:40.446684 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 05:37:40.446695 | orchestrator | Saturday 28 March 2026 05:37:26 +0000 (0:00:01.185) 0:23:33.259 ********
2026-03-28 05:37:40.446706 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446717 | orchestrator |
2026-03-28 05:37:40.446728 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 05:37:40.446740 | orchestrator | Saturday 28 March 2026 05:37:27 +0000 (0:00:01.138) 0:23:34.398 ********
2026-03-28 05:37:40.446752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:37:40.446763 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:37:40.446774 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:37:40.446785 | orchestrator |
2026-03-28 05:37:40.446797 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 05:37:40.446808 | orchestrator | Saturday 28 March 2026 05:37:30 +0000 (0:00:02.175) 0:23:36.576 ********
2026-03-28 05:37:40.446819 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:40.446830 | orchestrator |
2026-03-28 05:37:40.446842 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 05:37:40.446852 | orchestrator | Saturday 28 March 2026 05:37:31 +0000 (0:00:01.296) 0:23:37.873 ********
2026-03-28 05:37:40.446862 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:37:40.446872 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:37:40.446882 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:37:40.446891 | orchestrator |
2026-03-28 05:37:40.446917 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 05:37:40.446935 | orchestrator | Saturday 28 March 2026 05:37:34 +0000 (0:00:03.311) 0:23:41.184 ********
2026-03-28 05:37:40.446945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:37:40.446955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 05:37:40.446965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 05:37:40.446974 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:37:40.446984 | orchestrator |
2026-03-28 05:37:40.447009 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 05:37:40.447020 | orchestrator | Saturday 28 March 2026 05:37:36 +0000 (0:00:01.507) 0:23:42.691 ********
2026-03-28 05:37:40.447031 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447044 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447054 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447064 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:37:40.447074 | orchestrator |
2026-03-28 05:37:40.447084 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 05:37:40.447094 | orchestrator | Saturday 28 March 2026 05:37:37 +0000 (0:00:01.720) 0:23:44.412 ********
2026-03-28 05:37:40.447106 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447118 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447128 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447138 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:37:40.447148 | orchestrator |
2026-03-28 05:37:40.447158 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 05:37:40.447168 | orchestrator | Saturday 28 March 2026 05:37:39 +0000 (0:00:01.191) 0:23:45.603 ********
2026-03-28 05:37:40.447179 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:37:32.424975', 'end': '2026-03-28 05:37:32.470734', 'delta': '0:00:00.045759', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447204 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:37:32.941028', 'end': '2026-03-28 05:37:32.991871', 'delta': '0:00:00.050843', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:37:40.447222 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:37:33.509398', 'end': '2026-03-28 05:37:33.556675', 'delta': '0:00:00.047277', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:37:59.358812 | orchestrator |
2026-03-28 05:37:59.358929 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 05:37:59.358946 | orchestrator | Saturday 28 March 2026 05:37:40 +0000 (0:00:01.263) 0:23:46.866 ********
2026-03-28 05:37:59.358958 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:59.358970 | orchestrator |
2026-03-28 05:37:59.358982 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 05:37:59.358993 | orchestrator | Saturday 28 March 2026 05:37:41 +0000 (0:00:01.310) 0:23:48.138 ********
2026-03-28 05:37:59.359004 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:37:59.359016 | orchestrator |
2026-03-28 05:37:59.359027 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 05:37:59.359038 | orchestrator | Saturday 28 March 2026 05:37:43 +0000 (0:00:01.310) 0:23:49.448 ********
2026-03-28 05:37:59.359049 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:59.359060 | orchestrator |
2026-03-28 05:37:59.359071 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 05:37:59.359082 | orchestrator | Saturday 28 March 2026 05:37:44 +0000 (0:00:01.133) 0:23:50.582 ********
2026-03-28 05:37:59.359093 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:59.359112 | orchestrator |
2026-03-28 05:37:59.359132 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:37:59.359158 | orchestrator | Saturday 28 March 2026 05:37:46 +0000 (0:00:02.075) 0:23:52.658 ********
2026-03-28 05:37:59.359184 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:37:59.359202 | orchestrator |
2026-03-28 05:37:59.359226 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 05:37:59.359250 | orchestrator | Saturday 28 March 2026 05:37:47 +0000 (0:00:01.199) 0:23:53.858 ********
2026-03-28 05:37:59.359269 | orchestrator | skipping:
[testbed-node-0] 2026-03-28 05:37:59.359290 | orchestrator | 2026-03-28 05:37:59.359309 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:37:59.359327 | orchestrator | Saturday 28 March 2026 05:37:48 +0000 (0:00:01.123) 0:23:54.981 ******** 2026-03-28 05:37:59.359340 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359353 | orchestrator | 2026-03-28 05:37:59.359366 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:37:59.359406 | orchestrator | Saturday 28 March 2026 05:37:49 +0000 (0:00:01.235) 0:23:56.217 ******** 2026-03-28 05:37:59.359419 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359432 | orchestrator | 2026-03-28 05:37:59.359443 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:37:59.359454 | orchestrator | Saturday 28 March 2026 05:37:50 +0000 (0:00:01.206) 0:23:57.423 ******** 2026-03-28 05:37:59.359465 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359476 | orchestrator | 2026-03-28 05:37:59.359487 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:37:59.359498 | orchestrator | Saturday 28 March 2026 05:37:52 +0000 (0:00:01.151) 0:23:58.575 ******** 2026-03-28 05:37:59.359509 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359520 | orchestrator | 2026-03-28 05:37:59.359534 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:37:59.359553 | orchestrator | Saturday 28 March 2026 05:37:53 +0000 (0:00:01.288) 0:23:59.864 ******** 2026-03-28 05:37:59.359571 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359618 | orchestrator | 2026-03-28 05:37:59.359636 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:37:59.359656 | 
orchestrator | Saturday 28 March 2026 05:37:54 +0000 (0:00:01.140) 0:24:01.004 ******** 2026-03-28 05:37:59.359675 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359694 | orchestrator | 2026-03-28 05:37:59.359706 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:37:59.359717 | orchestrator | Saturday 28 March 2026 05:37:55 +0000 (0:00:01.179) 0:24:02.184 ******** 2026-03-28 05:37:59.359728 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359739 | orchestrator | 2026-03-28 05:37:59.359750 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:37:59.359762 | orchestrator | Saturday 28 March 2026 05:37:56 +0000 (0:00:01.149) 0:24:03.333 ******** 2026-03-28 05:37:59.359772 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:37:59.359783 | orchestrator | 2026-03-28 05:37:59.359794 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 05:37:59.359805 | orchestrator | Saturday 28 March 2026 05:37:58 +0000 (0:00:01.192) 0:24:04.526 ******** 2026-03-28 05:37:59.359837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.359857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.359900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.359923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:37:59.359958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.359971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.359982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:37:59.360014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:38:00.640889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:38:00.641045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:38:00.641065 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:00.641079 | orchestrator | 2026-03-28 05:38:00.641091 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:38:00.641103 | orchestrator | Saturday 28 March 2026 05:37:59 +0000 (0:00:01.250) 0:24:05.777 ******** 2026-03-28 05:38:00.641116 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641130 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641169 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641222 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641233 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641254 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:00.641276 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:41.984201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:38:41.984321 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984341 | orchestrator | 2026-03-28 05:38:41.984355 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:38:41.984368 | orchestrator | Saturday 28 March 2026 05:38:00 +0000 (0:00:01.283) 0:24:07.061 ******** 2026-03-28 05:38:41.984379 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:38:41.984391 | orchestrator | 2026-03-28 05:38:41.984403 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:38:41.984414 | orchestrator 
| Saturday 28 March 2026 05:38:02 +0000 (0:00:01.572) 0:24:08.634 ******** 2026-03-28 05:38:41.984425 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:38:41.984436 | orchestrator | 2026-03-28 05:38:41.984447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:38:41.984458 | orchestrator | Saturday 28 March 2026 05:38:03 +0000 (0:00:01.181) 0:24:09.816 ******** 2026-03-28 05:38:41.984469 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:38:41.984480 | orchestrator | 2026-03-28 05:38:41.984491 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:38:41.984502 | orchestrator | Saturday 28 March 2026 05:38:04 +0000 (0:00:01.501) 0:24:11.317 ******** 2026-03-28 05:38:41.984513 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984525 | orchestrator | 2026-03-28 05:38:41.984537 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:38:41.984548 | orchestrator | Saturday 28 March 2026 05:38:06 +0000 (0:00:01.159) 0:24:12.476 ******** 2026-03-28 05:38:41.984609 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984620 | orchestrator | 2026-03-28 05:38:41.984631 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:38:41.984642 | orchestrator | Saturday 28 March 2026 05:38:07 +0000 (0:00:01.317) 0:24:13.794 ******** 2026-03-28 05:38:41.984653 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984700 | orchestrator | 2026-03-28 05:38:41.984712 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:38:41.984723 | orchestrator | Saturday 28 March 2026 05:38:08 +0000 (0:00:01.188) 0:24:14.983 ******** 2026-03-28 05:38:41.984736 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:38:41.984750 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-28 05:38:41.984762 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 05:38:41.984775 | orchestrator | 2026-03-28 05:38:41.984787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:38:41.984800 | orchestrator | Saturday 28 March 2026 05:38:10 +0000 (0:00:02.181) 0:24:17.165 ******** 2026-03-28 05:38:41.984812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 05:38:41.984824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 05:38:41.984836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 05:38:41.984873 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984886 | orchestrator | 2026-03-28 05:38:41.984898 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:38:41.984911 | orchestrator | Saturday 28 March 2026 05:38:11 +0000 (0:00:01.165) 0:24:18.331 ******** 2026-03-28 05:38:41.984923 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:38:41.984935 | orchestrator | 2026-03-28 05:38:41.984948 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:38:41.984961 | orchestrator | Saturday 28 March 2026 05:38:13 +0000 (0:00:01.118) 0:24:19.450 ******** 2026-03-28 05:38:41.984973 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:38:41.984986 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:38:41.985000 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:38:41.985013 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:38:41.985025 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-28 05:38:41.985038 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:38:41.985051 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:38:41.985063 | orchestrator | 2026-03-28 05:38:41.985076 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:38:41.985088 | orchestrator | Saturday 28 March 2026 05:38:14 +0000 (0:00:01.874) 0:24:21.324 ******** 2026-03-28 05:38:41.985099 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 05:38:41.985110 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:38:41.985121 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:38:41.985132 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:38:41.985160 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:38:41.985172 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:38:41.985183 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:38:41.985193 | orchestrator | 2026-03-28 05:38:41.985204 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:38:41.985304 | orchestrator | Saturday 28 March 2026 05:38:17 +0000 (0:00:02.803) 0:24:24.128 ******** 2026-03-28 05:38:41.985324 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-28 05:38:41.985337 | orchestrator | 2026-03-28 05:38:41.985348 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:38:41.985359 
| orchestrator | Saturday 28 March 2026 05:38:18 +0000 (0:00:01.164) 0:24:25.293 ********
2026-03-28 05:38:41.985370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-28 05:38:41.985381 | orchestrator |
2026-03-28 05:38:41.985392 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 05:38:41.985403 | orchestrator | Saturday 28 March 2026 05:38:19 +0000 (0:00:01.138) 0:24:26.431 ********
2026-03-28 05:38:41.985414 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:38:41.985425 | orchestrator |
2026-03-28 05:38:41.985436 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 05:38:41.985447 | orchestrator | Saturday 28 March 2026 05:38:21 +0000 (0:00:01.587) 0:24:28.018 ********
2026-03-28 05:38:41.985457 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985468 | orchestrator |
2026-03-28 05:38:41.985479 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 05:38:41.985490 | orchestrator | Saturday 28 March 2026 05:38:22 +0000 (0:00:01.140) 0:24:29.159 ********
2026-03-28 05:38:41.985511 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985522 | orchestrator |
2026-03-28 05:38:41.985533 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 05:38:41.985544 | orchestrator | Saturday 28 March 2026 05:38:23 +0000 (0:00:01.132) 0:24:30.291 ********
2026-03-28 05:38:41.985573 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985584 | orchestrator |
2026-03-28 05:38:41.985595 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 05:38:41.985606 | orchestrator | Saturday 28 March 2026 05:38:25 +0000 (0:00:01.243) 0:24:31.535 ********
2026-03-28 05:38:41.985616 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:38:41.985627 | orchestrator |
2026-03-28 05:38:41.985638 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 05:38:41.985649 | orchestrator | Saturday 28 March 2026 05:38:26 +0000 (0:00:01.569) 0:24:33.105 ********
2026-03-28 05:38:41.985659 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985670 | orchestrator |
2026-03-28 05:38:41.985681 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 05:38:41.985692 | orchestrator | Saturday 28 March 2026 05:38:27 +0000 (0:00:01.165) 0:24:34.270 ********
2026-03-28 05:38:41.985702 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985713 | orchestrator |
2026-03-28 05:38:41.985724 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 05:38:41.985734 | orchestrator | Saturday 28 March 2026 05:38:28 +0000 (0:00:01.153) 0:24:35.423 ********
2026-03-28 05:38:41.985745 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:38:41.985756 | orchestrator |
2026-03-28 05:38:41.985767 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 05:38:41.985778 | orchestrator | Saturday 28 March 2026 05:38:30 +0000 (0:00:01.653) 0:24:37.078 ********
2026-03-28 05:38:41.985789 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:38:41.985799 | orchestrator |
2026-03-28 05:38:41.985810 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 05:38:41.985826 | orchestrator | Saturday 28 March 2026 05:38:32 +0000 (0:00:01.704) 0:24:38.782 ********
2026-03-28 05:38:41.985837 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985848 | orchestrator |
2026-03-28 05:38:41.985859 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 05:38:41.985870 | orchestrator | Saturday 28 March 2026 05:38:33 +0000 (0:00:01.164) 0:24:39.947 ********
2026-03-28 05:38:41.985880 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:38:41.985891 | orchestrator |
2026-03-28 05:38:41.985902 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 05:38:41.985912 | orchestrator | Saturday 28 March 2026 05:38:34 +0000 (0:00:01.199) 0:24:41.147 ********
2026-03-28 05:38:41.985923 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985934 | orchestrator |
2026-03-28 05:38:41.985944 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 05:38:41.985955 | orchestrator | Saturday 28 March 2026 05:38:35 +0000 (0:00:01.181) 0:24:42.328 ********
2026-03-28 05:38:41.985966 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.985977 | orchestrator |
2026-03-28 05:38:41.985987 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 05:38:41.985998 | orchestrator | Saturday 28 March 2026 05:38:37 +0000 (0:00:01.173) 0:24:43.502 ********
2026-03-28 05:38:41.986009 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.986096 | orchestrator |
2026-03-28 05:38:41.986107 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 05:38:41.986118 | orchestrator | Saturday 28 March 2026 05:38:38 +0000 (0:00:01.157) 0:24:44.660 ********
2026-03-28 05:38:41.986129 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.986139 | orchestrator |
2026-03-28 05:38:41.986150 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 05:38:41.986161 | orchestrator | Saturday 28 March 2026 05:38:39 +0000 (0:00:01.146) 0:24:45.806 ********
2026-03-28 05:38:41.986180 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:38:41.986190 | orchestrator |
2026-03-28 05:38:41.986201 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 05:38:41.986212 | orchestrator | Saturday 28 March 2026 05:38:40 +0000 (0:00:01.290) 0:24:47.097 ********
2026-03-28 05:38:41.986234 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.703764 | orchestrator |
2026-03-28 05:39:32.703909 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 05:39:32.703938 | orchestrator | Saturday 28 March 2026 05:38:41 +0000 (0:00:01.312) 0:24:48.409 ********
2026-03-28 05:39:32.703958 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.703977 | orchestrator |
2026-03-28 05:39:32.703996 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 05:39:32.704015 | orchestrator | Saturday 28 March 2026 05:38:43 +0000 (0:00:01.231) 0:24:49.641 ********
2026-03-28 05:39:32.704032 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.704052 | orchestrator |
2026-03-28 05:39:32.704070 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 05:39:32.704087 | orchestrator | Saturday 28 March 2026 05:38:44 +0000 (0:00:01.163) 0:24:50.805 ********
2026-03-28 05:39:32.704107 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704126 | orchestrator |
2026-03-28 05:39:32.704144 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 05:39:32.704162 | orchestrator | Saturday 28 March 2026 05:38:45 +0000 (0:00:01.154) 0:24:51.959 ********
2026-03-28 05:39:32.704180 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704200 | orchestrator |
2026-03-28 05:39:32.704219 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 05:39:32.704237 | orchestrator | Saturday 28 March 2026 05:38:46 +0000 (0:00:01.166) 0:24:53.126 ********
2026-03-28 05:39:32.704256 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704275 | orchestrator |
2026-03-28 05:39:32.704293 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 05:39:32.704312 | orchestrator | Saturday 28 March 2026 05:38:47 +0000 (0:00:01.216) 0:24:54.343 ********
2026-03-28 05:39:32.704331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704349 | orchestrator |
2026-03-28 05:39:32.704368 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 05:39:32.704390 | orchestrator | Saturday 28 March 2026 05:38:49 +0000 (0:00:01.133) 0:24:55.477 ********
2026-03-28 05:39:32.704408 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704428 | orchestrator |
2026-03-28 05:39:32.704448 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 05:39:32.704467 | orchestrator | Saturday 28 March 2026 05:38:50 +0000 (0:00:01.164) 0:24:56.642 ********
2026-03-28 05:39:32.704486 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704505 | orchestrator |
2026-03-28 05:39:32.704523 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 05:39:32.704570 | orchestrator | Saturday 28 March 2026 05:38:51 +0000 (0:00:01.190) 0:24:57.833 ********
2026-03-28 05:39:32.704589 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704606 | orchestrator |
2026-03-28 05:39:32.704623 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 05:39:32.704645 | orchestrator | Saturday 28 March 2026 05:38:52 +0000 (0:00:01.159) 0:24:58.993 ********
2026-03-28 05:39:32.704664 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704682 | orchestrator |
2026-03-28 05:39:32.704701 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 05:39:32.704721 | orchestrator | Saturday 28 March 2026 05:38:53 +0000 (0:00:01.246) 0:25:00.240 ********
2026-03-28 05:39:32.704740 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704758 | orchestrator |
2026-03-28 05:39:32.704778 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 05:39:32.704797 | orchestrator | Saturday 28 March 2026 05:38:54 +0000 (0:00:01.140) 0:25:01.380 ********
2026-03-28 05:39:32.704845 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704857 | orchestrator |
2026-03-28 05:39:32.704868 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 05:39:32.704879 | orchestrator | Saturday 28 March 2026 05:38:56 +0000 (0:00:01.377) 0:25:02.758 ********
2026-03-28 05:39:32.704890 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704901 | orchestrator |
2026-03-28 05:39:32.704926 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 05:39:32.704937 | orchestrator | Saturday 28 March 2026 05:38:57 +0000 (0:00:01.207) 0:25:03.966 ********
2026-03-28 05:39:32.704948 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.704959 | orchestrator |
2026-03-28 05:39:32.704970 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 05:39:32.704981 | orchestrator | Saturday 28 March 2026 05:38:58 +0000 (0:00:01.138) 0:25:05.104 ********
2026-03-28 05:39:32.704992 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705002 | orchestrator |
2026-03-28 05:39:32.705013 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 05:39:32.705024 | orchestrator | Saturday 28 March 2026 05:39:00 +0000 (0:00:02.030) 0:25:07.135 ********
2026-03-28 05:39:32.705035 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705045 | orchestrator |
2026-03-28 05:39:32.705056 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 05:39:32.705067 | orchestrator | Saturday 28 March 2026 05:39:03 +0000 (0:00:02.407) 0:25:09.543 ********
2026-03-28 05:39:32.705078 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-28 05:39:32.705090 | orchestrator |
2026-03-28 05:39:32.705101 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 05:39:32.705112 | orchestrator | Saturday 28 March 2026 05:39:04 +0000 (0:00:01.130) 0:25:10.673 ********
2026-03-28 05:39:32.705123 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705134 | orchestrator |
2026-03-28 05:39:32.705144 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 05:39:32.705155 | orchestrator | Saturday 28 March 2026 05:39:05 +0000 (0:00:01.196) 0:25:11.869 ********
2026-03-28 05:39:32.705166 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705177 | orchestrator |
2026-03-28 05:39:32.705188 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 05:39:32.705198 | orchestrator | Saturday 28 March 2026 05:39:06 +0000 (0:00:01.205) 0:25:13.075 ********
2026-03-28 05:39:32.705233 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 05:39:32.705245 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 05:39:32.705256 | orchestrator |
2026-03-28 05:39:32.705267 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 05:39:32.705278 | orchestrator | Saturday 28 March 2026 05:39:08 +0000 (0:00:01.905) 0:25:14.981 ********
2026-03-28 05:39:32.705288 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705299 | orchestrator |
2026-03-28 05:39:32.705310 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 05:39:32.705321 | orchestrator | Saturday 28 March 2026 05:39:10 +0000 (0:00:01.559) 0:25:16.541 ********
2026-03-28 05:39:32.705331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705342 | orchestrator |
2026-03-28 05:39:32.705353 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 05:39:32.705364 | orchestrator | Saturday 28 March 2026 05:39:11 +0000 (0:00:01.280) 0:25:17.822 ********
2026-03-28 05:39:32.705375 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705385 | orchestrator |
2026-03-28 05:39:32.705396 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 05:39:32.705407 | orchestrator | Saturday 28 March 2026 05:39:12 +0000 (0:00:01.243) 0:25:19.066 ********
2026-03-28 05:39:32.705418 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705437 | orchestrator |
2026-03-28 05:39:32.705448 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 05:39:32.705459 | orchestrator | Saturday 28 March 2026 05:39:13 +0000 (0:00:01.206) 0:25:20.273 ********
2026-03-28 05:39:32.705470 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-28 05:39:32.705480 | orchestrator |
2026-03-28 05:39:32.705491 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 05:39:32.705502 | orchestrator | Saturday 28 March 2026 05:39:15 +0000 (0:00:01.169) 0:25:21.442 ********
2026-03-28 05:39:32.705512 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705523 | orchestrator |
2026-03-28 05:39:32.705560 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 05:39:32.705573 | orchestrator | Saturday 28 March 2026 05:39:16 +0000 (0:00:01.728) 0:25:23.171 ********
2026-03-28 05:39:32.705584 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 05:39:32.705595 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 05:39:32.705606 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 05:39:32.705616 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705628 | orchestrator |
2026-03-28 05:39:32.705639 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 05:39:32.705650 | orchestrator | Saturday 28 March 2026 05:39:17 +0000 (0:00:01.196) 0:25:24.368 ********
2026-03-28 05:39:32.705661 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705672 | orchestrator |
2026-03-28 05:39:32.705683 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 05:39:32.705694 | orchestrator | Saturday 28 March 2026 05:39:19 +0000 (0:00:01.167) 0:25:25.536 ********
2026-03-28 05:39:32.705705 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705715 | orchestrator |
2026-03-28 05:39:32.705727 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 05:39:32.705737 | orchestrator | Saturday 28 March 2026 05:39:20 +0000 (0:00:01.217) 0:25:26.753 ********
2026-03-28 05:39:32.705748 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705759 | orchestrator |
2026-03-28 05:39:32.705770 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 05:39:32.705781 | orchestrator | Saturday 28 March 2026 05:39:21 +0000 (0:00:01.173) 0:25:27.927 ********
2026-03-28 05:39:32.705792 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705803 | orchestrator |
2026-03-28 05:39:32.705819 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 05:39:32.705831 | orchestrator | Saturday 28 March 2026 05:39:22 +0000 (0:00:01.234) 0:25:29.161 ********
2026-03-28 05:39:32.705842 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.705853 | orchestrator |
2026-03-28 05:39:32.705864 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 05:39:32.705875 | orchestrator | Saturday 28 March 2026 05:39:23 +0000 (0:00:01.256) 0:25:30.418 ********
2026-03-28 05:39:32.705886 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705897 | orchestrator |
2026-03-28 05:39:32.705915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 05:39:32.705933 | orchestrator | Saturday 28 March 2026 05:39:26 +0000 (0:00:02.632) 0:25:33.051 ********
2026-03-28 05:39:32.705952 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:39:32.705970 | orchestrator |
2026-03-28 05:39:32.705989 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 05:39:32.706009 | orchestrator | Saturday 28 March 2026 05:39:27 +0000 (0:00:01.164) 0:25:34.215 ********
2026-03-28 05:39:32.706114 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-28 05:39:32.706135 | orchestrator |
2026-03-28 05:39:32.706154 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 05:39:32.706187 | orchestrator | Saturday 28 March 2026 05:39:29 +0000 (0:00:01.343) 0:25:35.558 ********
2026-03-28 05:39:32.706238 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.706259 | orchestrator |
2026-03-28 05:39:32.706279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 05:39:32.706297 | orchestrator | Saturday 28 March 2026 05:39:30 +0000 (0:00:01.192) 0:25:36.751 ********
2026-03-28 05:39:32.706317 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.706336 | orchestrator |
2026-03-28 05:39:32.706355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 05:39:32.706375 | orchestrator | Saturday 28 March 2026 05:39:31 +0000 (0:00:01.204) 0:25:37.956 ********
2026-03-28 05:39:32.706394 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:39:32.706412 | orchestrator |
2026-03-28 05:39:32.706447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 05:40:17.399969 | orchestrator | Saturday 28 March 2026 05:39:32 +0000 (0:00:01.165) 0:25:39.122 ********
2026-03-28 05:40:17.400090 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400108 | orchestrator |
2026-03-28 05:40:17.400121 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 05:40:17.400133 | orchestrator | Saturday 28 March 2026 05:39:33 +0000 (0:00:01.203) 0:25:40.325 ********
2026-03-28 05:40:17.400144 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400155 | orchestrator |
2026-03-28 05:40:17.400166 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 05:40:17.400178 | orchestrator | Saturday 28 March 2026 05:39:35 +0000 (0:00:01.191) 0:25:41.516 ********
2026-03-28 05:40:17.400189 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400199 | orchestrator |
2026-03-28 05:40:17.400211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 05:40:17.400222 | orchestrator | Saturday 28 March 2026 05:39:36 +0000 (0:00:01.228) 0:25:42.745 ********
2026-03-28 05:40:17.400233 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400244 | orchestrator |
2026-03-28 05:40:17.400255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 05:40:17.400266 | orchestrator | Saturday 28 March 2026 05:39:37 +0000 (0:00:01.195) 0:25:43.940 ********
2026-03-28 05:40:17.400277 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400288 | orchestrator |
2026-03-28 05:40:17.400299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 05:40:17.400310 | orchestrator | Saturday 28 March 2026 05:39:38 +0000 (0:00:01.176) 0:25:45.116 ********
2026-03-28 05:40:17.400321 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:40:17.400334 | orchestrator |
2026-03-28 05:40:17.400345 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 05:40:17.400356 | orchestrator | Saturday 28 March 2026 05:39:39 +0000 (0:00:01.151) 0:25:46.268 ********
2026-03-28 05:40:17.400368 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-28 05:40:17.400380 | orchestrator |
2026-03-28 05:40:17.400391 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 05:40:17.400402 | orchestrator | Saturday 28 March 2026 05:39:40 +0000 (0:00:01.127) 0:25:47.396 ********
2026-03-28 05:40:17.400413 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-28 05:40:17.400425 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-28 05:40:17.400436 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-28 05:40:17.400447 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-28 05:40:17.400458 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-28 05:40:17.400469 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-28 05:40:17.400480 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-28 05:40:17.400491 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-28 05:40:17.400502 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 05:40:17.400586 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 05:40:17.400601 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 05:40:17.400612 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 05:40:17.400623 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 05:40:17.400634 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 05:40:17.400645 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 05:40:17.400656 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 05:40:17.400667 | orchestrator |
2026-03-28 05:40:17.400693 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 05:40:17.400705 | orchestrator | Saturday 28 March 2026 05:39:47 +0000 (0:00:06.961) 0:25:54.357 ********
2026-03-28 05:40:17.400716 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400727 | orchestrator |
2026-03-28 05:40:17.400738 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 05:40:17.400749 | orchestrator | Saturday 28 March 2026 05:39:49 +0000 (0:00:01.176) 0:25:55.534 ********
2026-03-28 05:40:17.400760 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400771 | orchestrator |
2026-03-28 05:40:17.400782 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 05:40:17.400793 | orchestrator | Saturday 28 March 2026 05:39:50 +0000 (0:00:01.135) 0:25:56.669 ********
2026-03-28 05:40:17.400804 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400816 | orchestrator |
2026-03-28 05:40:17.400827 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 05:40:17.400837 | orchestrator | Saturday 28 March 2026 05:39:51 +0000 (0:00:01.232) 0:25:57.902 ********
2026-03-28 05:40:17.400848 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400859 | orchestrator |
2026-03-28 05:40:17.400870 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 05:40:17.400881 | orchestrator | Saturday 28 March 2026 05:39:52 +0000 (0:00:01.153) 0:25:59.055 ********
2026-03-28 05:40:17.400892 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400903 | orchestrator |
2026-03-28 05:40:17.400914 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 05:40:17.400925 | orchestrator | Saturday 28 March 2026 05:39:53 +0000 (0:00:01.157) 0:26:00.213 ********
2026-03-28 05:40:17.400936 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400946 | orchestrator |
2026-03-28 05:40:17.400957 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 05:40:17.400968 | orchestrator | Saturday 28 March 2026 05:39:54 +0000 (0:00:01.159) 0:26:01.373 ********
2026-03-28 05:40:17.400979 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.400990 | orchestrator |
2026-03-28 05:40:17.401020 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 05:40:17.401032 | orchestrator | Saturday 28 March 2026 05:39:56 +0000 (0:00:01.140) 0:26:02.514 ********
2026-03-28 05:40:17.401043 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401053 | orchestrator |
2026-03-28 05:40:17.401064 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 05:40:17.401075 | orchestrator | Saturday 28 March 2026 05:39:57 +0000 (0:00:01.159) 0:26:03.673 ********
2026-03-28 05:40:17.401086 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401097 | orchestrator |
2026-03-28 05:40:17.401108 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 05:40:17.401118 | orchestrator | Saturday 28 March 2026 05:39:58 +0000 (0:00:01.180) 0:26:04.854 ********
2026-03-28 05:40:17.401129 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401140 | orchestrator |
2026-03-28 05:40:17.401151 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 05:40:17.401171 | orchestrator | Saturday 28 March 2026 05:39:59 +0000 (0:00:01.152) 0:26:06.007 ********
2026-03-28 05:40:17.401182 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401193 | orchestrator |
2026-03-28 05:40:17.401204 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 05:40:17.401215 | orchestrator | Saturday 28 March 2026 05:40:00 +0000 (0:00:01.196) 0:26:07.203 ********
2026-03-28 05:40:17.401225 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401236 | orchestrator |
2026-03-28 05:40:17.401247 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 05:40:17.401258 | orchestrator | Saturday 28 March 2026 05:40:01 +0000 (0:00:01.143) 0:26:08.347 ********
2026-03-28 05:40:17.401268 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401279 | orchestrator |
2026-03-28 05:40:17.401290 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 05:40:17.401301 | orchestrator | Saturday 28 March 2026 05:40:03 +0000 (0:00:01.754) 0:26:10.102 ********
2026-03-28 05:40:17.401312 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401322 | orchestrator |
2026-03-28 05:40:17.401333 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 05:40:17.401344 | orchestrator | Saturday 28 March 2026 05:40:04 +0000 (0:00:01.133) 0:26:11.236 ********
2026-03-28 05:40:17.401355 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401365 | orchestrator |
2026-03-28 05:40:17.401376 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 05:40:17.401387 | orchestrator | Saturday 28 March 2026 05:40:06 +0000 (0:00:01.266) 0:26:12.503 ********
2026-03-28 05:40:17.401398 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401408 | orchestrator |
2026-03-28 05:40:17.401419 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 05:40:17.401430 | orchestrator | Saturday 28 March 2026 05:40:07 +0000 (0:00:01.112) 0:26:13.616 ********
2026-03-28 05:40:17.401441 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401452 | orchestrator |
2026-03-28 05:40:17.401463 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:40:17.401475 | orchestrator | Saturday 28 March 2026 05:40:08 +0000 (0:00:01.128) 0:26:14.744 ********
2026-03-28 05:40:17.401486 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401497 | orchestrator |
2026-03-28 05:40:17.401508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:40:17.401541 | orchestrator | Saturday 28 March 2026 05:40:09 +0000 (0:00:01.152) 0:26:15.896 ********
2026-03-28 05:40:17.401553 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401564 | orchestrator |
2026-03-28 05:40:17.401575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:40:17.401592 | orchestrator | Saturday 28 March 2026 05:40:10 +0000 (0:00:01.140) 0:26:17.037 ********
2026-03-28 05:40:17.401603 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401614 | orchestrator |
2026-03-28 05:40:17.401625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:40:17.401636 | orchestrator | Saturday 28 March 2026 05:40:11 +0000 (0:00:01.142) 0:26:18.179 ********
2026-03-28 05:40:17.401647 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401658 | orchestrator |
2026-03-28 05:40:17.401668 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:40:17.401679 | orchestrator | Saturday 28 March 2026 05:40:12 +0000 (0:00:01.156) 0:26:19.336 ********
2026-03-28 05:40:17.401690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:40:17.401701 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:40:17.401712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:40:17.401724 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401734 | orchestrator |
2026-03-28 05:40:17.401746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:40:17.401767 | orchestrator | Saturday 28 March 2026 05:40:14 +0000 (0:00:01.476) 0:26:20.812 ********
2026-03-28 05:40:17.401778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:40:17.401789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:40:17.401800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:40:17.401811 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401822 | orchestrator |
2026-03-28 05:40:17.401833 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:40:17.401844 | orchestrator | Saturday 28 March 2026 05:40:15 +0000 (0:00:01.491) 0:26:22.303 ********
2026-03-28 05:40:17.401856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 05:40:17.401867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 05:40:17.401878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 05:40:17.401889 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:40:17.401904 | orchestrator |
2026-03-28 05:40:17.401922 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:41:31.923871 | orchestrator | Saturday 28 March 2026 05:40:17 +0000 (0:00:01.512) 0:26:23.816 ********
2026-03-28 05:41:31.923992 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924009 | orchestrator |
2026-03-28 05:41:31.924022 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:41:31.924034 | orchestrator | Saturday 28 March 2026 05:40:18 +0000 (0:00:01.179) 0:26:24.995 ********
2026-03-28 05:41:31.924045 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-28 05:41:31.924057 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924069 | orchestrator |
2026-03-28 05:41:31.924080 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:41:31.924091 | orchestrator | Saturday 28 March 2026 05:40:20 +0000 (0:00:01.483) 0:26:26.479 ********
2026-03-28 05:41:31.924103 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:41:31.924114 | orchestrator |
2026-03-28 05:41:31.924125 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-28 05:41:31.924136 | orchestrator | Saturday 28 March 2026 05:40:21 +0000 (0:00:01.683) 0:26:28.279 ********
2026-03-28 05:41:31.924147 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 05:41:31.924159 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:41:31.924171 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:41:31.924182 | orchestrator |
2026-03-28 05:41:31.924193 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-28 05:41:31.924204 | orchestrator | Saturday 28 March 2026 05:40:23 +0000 (0:00:01.683) 0:26:29.962 ********
2026-03-28 05:41:31.924215 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-03-28 05:41:31.924226 | orchestrator |
2026-03-28 05:41:31.924237 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-28 05:41:31.924248 | orchestrator | Saturday 28 March 2026 05:40:25 +0000 (0:00:01.525) 0:26:31.487 ********
2026-03-28 05:41:31.924259 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:41:31.924271 | orchestrator |
2026-03-28 05:41:31.924282 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-28 05:41:31.924293 | orchestrator | Saturday 28 March 2026 05:40:26 +0000 (0:00:01.480) 0:26:32.967 ********
2026-03-28 05:41:31.924304 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924315 | orchestrator |
2026-03-28 05:41:31.924326 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-28 05:41:31.924337 | orchestrator | Saturday 28 March 2026 05:40:27 +0000 (0:00:01.195) 0:26:34.163 ********
2026-03-28 05:41:31.924348 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924359 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924395 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924407 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-28 05:41:31.924420 | orchestrator |
2026-03-28 05:41:31.924433 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-28 05:41:31.924446 | orchestrator | Saturday 28 March 2026 05:40:35 +0000 (0:00:07.589) 0:26:41.753 ********
2026-03-28 05:41:31.924459 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:41:31.924471 | orchestrator |
2026-03-28 05:41:31.924485 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-28 05:41:31.924523 | orchestrator | Saturday 28 March 2026 05:40:36 +0000 (0:00:01.179) 0:26:42.932 ********
2026-03-28 05:41:31.924536 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924549 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 05:41:31.924561 | orchestrator |
2026-03-28 05:41:31.924574 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-28 05:41:31.924601 | orchestrator | Saturday 28 March 2026 05:40:39 +0000 (0:00:03.311) 0:26:46.243 ********
2026-03-28 05:41:31.924614 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924627 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 05:41:31.924640 | orchestrator |
2026-03-28 05:41:31.924653 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-28 05:41:31.924665 | orchestrator | Saturday 28 March 2026 05:40:41 +0000 (0:00:02.100) 0:26:48.344 ********
2026-03-28 05:41:31.924677 | orchestrator | ok: [testbed-node-0]
2026-03-28 05:41:31.924691 | orchestrator |
2026-03-28 05:41:31.924703 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-28 05:41:31.924716 | orchestrator | Saturday 28 March 2026 05:40:43 +0000 (0:00:01.525) 0:26:49.869 ********
2026-03-28 05:41:31.924728 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924741 | orchestrator |
2026-03-28 05:41:31.924758 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-28 05:41:31.924778 | orchestrator | Saturday 28 March 2026 05:40:44 +0000 (0:00:01.203) 0:26:51.073 ********
2026-03-28 05:41:31.924798 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924816 | orchestrator |
2026-03-28 05:41:31.924834 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-28 05:41:31.924853 | orchestrator | Saturday 28 March 2026 05:40:45 +0000 (0:00:01.152) 0:26:52.226 ********
2026-03-28 05:41:31.924871 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-03-28 05:41:31.924891 | orchestrator |
2026-03-28 05:41:31.924909 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-28 05:41:31.924928 | orchestrator | Saturday 28 March 2026 05:40:47 +0000 (0:00:01.547) 0:26:53.774 ********
2026-03-28 05:41:31.924946 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.924965 | orchestrator |
2026-03-28 05:41:31.924984 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-28 05:41:31.925002 | orchestrator | Saturday 28 March 2026 05:40:48 +0000 (0:00:01.165) 0:26:54.940 ********
2026-03-28 05:41:31.925022 | orchestrator | skipping: [testbed-node-0]
2026-03-28 05:41:31.925043 | orchestrator |
2026-03-28 05:41:31.925063 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-28 05:41:31.925106 | orchestrator | Saturday 28 March 2026 05:40:49 +0000 (0:00:01.160) 0:26:56.100 ********
2026-03-28 05:41:31.925126 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-03-28
05:41:31.925142 | orchestrator | 2026-03-28 05:41:31.925153 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 05:41:31.925164 | orchestrator | Saturday 28 March 2026 05:40:51 +0000 (0:00:01.534) 0:26:57.634 ******** 2026-03-28 05:41:31.925176 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:41:31.925187 | orchestrator | 2026-03-28 05:41:31.925198 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 05:41:31.925222 | orchestrator | Saturday 28 March 2026 05:40:53 +0000 (0:00:02.073) 0:26:59.708 ******** 2026-03-28 05:41:31.925234 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:41:31.925245 | orchestrator | 2026-03-28 05:41:31.925256 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 05:41:31.925267 | orchestrator | Saturday 28 March 2026 05:40:55 +0000 (0:00:02.061) 0:27:01.769 ******** 2026-03-28 05:41:31.925278 | orchestrator | ok: [testbed-node-0] 2026-03-28 05:41:31.925289 | orchestrator | 2026-03-28 05:41:31.925300 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-28 05:41:31.925311 | orchestrator | Saturday 28 March 2026 05:40:57 +0000 (0:00:02.537) 0:27:04.307 ******** 2026-03-28 05:41:31.925322 | orchestrator | changed: [testbed-node-0] 2026-03-28 05:41:31.925333 | orchestrator | 2026-03-28 05:41:31.925344 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 05:41:31.925355 | orchestrator | Saturday 28 March 2026 05:41:01 +0000 (0:00:03.967) 0:27:08.275 ******** 2026-03-28 05:41:31.925366 | orchestrator | skipping: [testbed-node-0] 2026-03-28 05:41:31.925377 | orchestrator | 2026-03-28 05:41:31.925388 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-28 05:41:31.925399 | orchestrator | 2026-03-28 05:41:31.925410 | 
orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-28 05:41:31.925421 | orchestrator | Saturday 28 March 2026 05:41:03 +0000 (0:00:01.326) 0:27:09.602 ********
2026-03-28 05:41:31.925432 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:41:31.925443 | orchestrator | 
2026-03-28 05:41:31.925454 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-03-28 05:41:31.925465 | orchestrator | Saturday 28 March 2026 05:41:15 +0000 (0:00:12.458) 0:27:22.061 ********
2026-03-28 05:41:31.925476 | orchestrator | changed: [testbed-node-1]
2026-03-28 05:41:31.925487 | orchestrator | 
2026-03-28 05:41:31.925520 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 05:41:31.925532 | orchestrator | Saturday 28 March 2026 05:41:17 +0000 (0:00:02.098) 0:27:24.159 ********
2026-03-28 05:41:31.925543 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-03-28 05:41:31.925554 | orchestrator | 
2026-03-28 05:41:31.925565 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 05:41:31.925576 | orchestrator | Saturday 28 March 2026 05:41:18 +0000 (0:00:01.158) 0:27:25.318 ********
2026-03-28 05:41:31.925596 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925615 | orchestrator | 
2026-03-28 05:41:31.925634 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 05:41:31.925653 | orchestrator | Saturday 28 March 2026 05:41:20 +0000 (0:00:01.390) 0:27:26.708 ********
2026-03-28 05:41:31.925671 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925689 | orchestrator | 
2026-03-28 05:41:31.925708 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 05:41:31.925728 | orchestrator | Saturday 28 March 2026 05:41:21 +0000 (0:00:01.103) 0:27:27.812 ********
2026-03-28 05:41:31.925746 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925764 | orchestrator | 
2026-03-28 05:41:31.925782 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 05:41:31.925812 | orchestrator | Saturday 28 March 2026 05:41:22 +0000 (0:00:01.502) 0:27:29.315 ********
2026-03-28 05:41:31.925831 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925843 | orchestrator | 
2026-03-28 05:41:31.925854 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 05:41:31.925865 | orchestrator | Saturday 28 March 2026 05:41:24 +0000 (0:00:01.166) 0:27:30.481 ********
2026-03-28 05:41:31.925876 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925887 | orchestrator | 
2026-03-28 05:41:31.925898 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 05:41:31.925910 | orchestrator | Saturday 28 March 2026 05:41:25 +0000 (0:00:01.248) 0:27:31.730 ********
2026-03-28 05:41:31.925921 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.925941 | orchestrator | 
2026-03-28 05:41:31.925952 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 05:41:31.925963 | orchestrator | Saturday 28 March 2026 05:41:26 +0000 (0:00:01.206) 0:27:32.936 ********
2026-03-28 05:41:31.925974 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:31.925985 | orchestrator | 
2026-03-28 05:41:31.925996 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 05:41:31.926007 | orchestrator | Saturday 28 March 2026 05:41:27 +0000 (0:00:01.172) 0:27:34.109 ********
2026-03-28 05:41:31.926094 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.926109 | orchestrator | 
2026-03-28 05:41:31.926120 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 05:41:31.926131 | orchestrator | Saturday 28 March 2026 05:41:28 +0000 (0:00:01.245) 0:27:35.355 ********
2026-03-28 05:41:31.926142 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 05:41:31.926153 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:41:31.926165 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:41:31.926176 | orchestrator | 
2026-03-28 05:41:31.926187 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 05:41:31.926197 | orchestrator | Saturday 28 March 2026 05:41:30 +0000 (0:00:01.742) 0:27:37.097 ********
2026-03-28 05:41:31.926208 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:31.926219 | orchestrator | 
2026-03-28 05:41:31.926230 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 05:41:31.926254 | orchestrator | Saturday 28 March 2026 05:41:31 +0000 (0:00:01.245) 0:27:38.342 ********
2026-03-28 05:41:56.946752 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 05:41:56.946858 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 05:41:56.946872 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:41:56.946882 | orchestrator | 
2026-03-28 05:41:56.946892 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 05:41:56.946902 | orchestrator | Saturday 28 March 2026 05:41:34 +0000 (0:00:02.950) 0:27:41.293 ********
2026-03-28 05:41:56.946912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-03-28 05:41:56.946922 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-03-28 05:41:56.946931 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2) 
2026-03-28 05:41:56.946941 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.946950 | orchestrator | 
2026-03-28 05:41:56.946960 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 05:41:56.946969 | orchestrator | Saturday 28 March 2026 05:41:36 +0000 (0:00:01.487) 0:27:42.781 ********
2026-03-28 05:41:56.946979 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.946992 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.947001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.947010 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947019 | orchestrator | 
2026-03-28 05:41:56.947028 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 05:41:56.947037 | orchestrator | Saturday 28 March 2026 05:41:38 +0000 (0:00:01.672) 0:27:44.454 ********
2026-03-28 05:41:56.947068 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.947092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.947100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:41:56.947109 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947117 | orchestrator | 
2026-03-28 05:41:56.947125 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 05:41:56.947133 | orchestrator | Saturday 28 March 2026 05:41:39 +0000 (0:00:01.198) 0:27:45.652 ********
2026-03-28 05:41:56.947144 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:41:32.457057', 'end': '2026-03-28 05:41:32.512281', 'delta': '0:00:00.055224', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:41:56.947169 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:41:33.023112', 'end': '2026-03-28 05:41:33.080189', 'delta': '0:00:00.057077', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:41:56.947178 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:41:33.578822', 'end': '2026-03-28 05:41:33.630158', 'delta': '0:00:00.051336', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:41:56.947187 | orchestrator | 
2026-03-28 05:41:56.947195 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 05:41:56.947209 | orchestrator | Saturday 28 March 2026 05:41:40 +0000 (0:00:01.260) 0:27:46.914 ********
2026-03-28 
05:41:56.947217 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:56.947226 | orchestrator | 
2026-03-28 05:41:56.947234 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 05:41:56.947242 | orchestrator | Saturday 28 March 2026 05:41:41 +0000 (0:00:01.284) 0:27:48.198 ********
2026-03-28 05:41:56.947249 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947257 | orchestrator | 
2026-03-28 05:41:56.947265 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 05:41:56.947273 | orchestrator | Saturday 28 March 2026 05:41:43 +0000 (0:00:01.270) 0:27:49.469 ********
2026-03-28 05:41:56.947281 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:56.947289 | orchestrator | 
2026-03-28 05:41:56.947297 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 05:41:56.947305 | orchestrator | Saturday 28 March 2026 05:41:44 +0000 (0:00:01.187) 0:27:50.657 ********
2026-03-28 05:41:56.947313 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:41:56.947321 | orchestrator | 
2026-03-28 05:41:56.947328 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:41:56.947336 | orchestrator | Saturday 28 March 2026 05:41:46 +0000 (0:00:02.007) 0:27:52.665 ********
2026-03-28 05:41:56.947344 | orchestrator | ok: [testbed-node-1]
2026-03-28 05:41:56.947352 | orchestrator | 
2026-03-28 05:41:56.947360 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 05:41:56.947372 | orchestrator | Saturday 28 March 2026 05:41:47 +0000 (0:00:01.187) 0:27:53.852 ********
2026-03-28 05:41:56.947380 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947388 | orchestrator | 
2026-03-28 05:41:56.947396 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 05:41:56.947404 | orchestrator | Saturday 28 March 2026 05:41:48 +0000 (0:00:01.222) 0:27:55.074 ********
2026-03-28 05:41:56.947412 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947420 | orchestrator | 
2026-03-28 05:41:56.947427 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:41:56.947435 | orchestrator | Saturday 28 March 2026 05:41:49 +0000 (0:00:01.283) 0:27:56.358 ********
2026-03-28 05:41:56.947443 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947451 | orchestrator | 
2026-03-28 05:41:56.947459 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 05:41:56.947467 | orchestrator | Saturday 28 March 2026 05:41:51 +0000 (0:00:01.174) 0:27:57.533 ********
2026-03-28 05:41:56.947475 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947482 | orchestrator | 
2026-03-28 05:41:56.947511 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 05:41:56.947520 | orchestrator | Saturday 28 March 2026 05:41:52 +0000 (0:00:01.186) 0:27:58.719 ********
2026-03-28 05:41:56.947528 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947535 | orchestrator | 
2026-03-28 05:41:56.947543 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 05:41:56.947551 | orchestrator | Saturday 28 March 2026 05:41:53 +0000 (0:00:01.143) 0:27:59.862 ********
2026-03-28 05:41:56.947559 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947567 | orchestrator | 
2026-03-28 05:41:56.947575 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 05:41:56.947583 | orchestrator | Saturday 28 March 2026 05:41:54 +0000 (0:00:01.190) 0:28:01.053 ********
2026-03-28 05:41:56.947591 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947599 | orchestrator | 
2026-03-28 05:41:56.947607 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 05:41:56.947614 | orchestrator | Saturday 28 March 2026 05:41:55 +0000 (0:00:01.146) 0:28:02.199 ********
2026-03-28 05:41:56.947623 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:41:56.947637 | orchestrator | 
2026-03-28 05:41:56.947645 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 05:41:56.947659 | orchestrator | Saturday 28 March 2026 05:41:56 +0000 (0:00:01.166) 0:28:03.366 ********
2026-03-28 05:42:00.655935 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:42:00.656037 | orchestrator | 
2026-03-28 05:42:00.656053 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 05:42:00.656066 | orchestrator | Saturday 28 March 2026 05:41:58 +0000 (0:00:01.136) 0:28:04.502 ********
2026-03-28 05:42:00.656079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656107 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}) 
2026-03-28 05:42:00.656151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b8082e3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}) 
2026-03-28 05:42:00.656248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-03-28 05:42:00.656272 | orchestrator | skipping: [testbed-node-1]
2026-03-28 05:42:00.656283 | orchestrator | 
2026-03-28 05:42:00.656295 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 05:42:00.656307 | orchestrator | Saturday 28 March 2026 05:41:59 +0000 (0:00:01.312) 0:28:05.815 ********
2026-03-28 05:42:00.656325 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:00.656339 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:00.656368 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.391952 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392068 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392085 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392115 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392151 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b8082e3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1', 'scsi-SQEMU_QEMU_HARDDISK_1b8082e3-0236-4677-af0b-8478c2d5c241-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392187 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
2026-03-28 05:42:11.392200 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:42:11.392212 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:11.392225 | orchestrator | 2026-03-28 05:42:11.392238 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:42:11.392255 | orchestrator | Saturday 28 March 2026 05:42:00 +0000 (0:00:01.266) 0:28:07.082 ******** 2026-03-28 05:42:11.392267 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:11.392279 | orchestrator | 2026-03-28 05:42:11.392290 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:42:11.392301 | orchestrator | Saturday 28 March 2026 05:42:02 +0000 (0:00:01.487) 0:28:08.569 ******** 2026-03-28 05:42:11.392312 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:11.392323 | orchestrator | 2026-03-28 05:42:11.392334 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:42:11.392353 | orchestrator | Saturday 28 March 2026 05:42:03 +0000 (0:00:01.148) 0:28:09.718 ******** 2026-03-28 05:42:11.392364 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:11.392375 | orchestrator | 2026-03-28 05:42:11.392386 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:42:11.392397 | orchestrator | Saturday 28 March 2026 05:42:04 +0000 (0:00:01.550) 0:28:11.269 ******** 2026-03-28 05:42:11.392408 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:11.392419 | orchestrator | 2026-03-28 05:42:11.392430 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:42:11.392441 | orchestrator | Saturday 28 March 2026 05:42:06 
+0000 (0:00:01.176) 0:28:12.445 ******** 2026-03-28 05:42:11.392452 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:11.392465 | orchestrator | 2026-03-28 05:42:11.392478 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:42:11.392516 | orchestrator | Saturday 28 March 2026 05:42:07 +0000 (0:00:01.255) 0:28:13.701 ******** 2026-03-28 05:42:11.392530 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:11.392542 | orchestrator | 2026-03-28 05:42:11.392555 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:42:11.392567 | orchestrator | Saturday 28 March 2026 05:42:08 +0000 (0:00:01.158) 0:28:14.860 ******** 2026-03-28 05:42:11.392579 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-28 05:42:11.392593 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:42:11.392605 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-28 05:42:11.392616 | orchestrator | 2026-03-28 05:42:11.392627 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:42:11.392638 | orchestrator | Saturday 28 March 2026 05:42:10 +0000 (0:00:01.728) 0:28:16.588 ******** 2026-03-28 05:42:11.392649 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 05:42:11.392660 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 05:42:11.392671 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 05:42:11.392682 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:11.392693 | orchestrator | 2026-03-28 05:42:11.392711 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:42:49.596416 | orchestrator | Saturday 28 March 2026 05:42:11 +0000 (0:00:01.225) 0:28:17.814 ******** 2026-03-28 05:42:49.596607 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.596633 | orchestrator | 2026-03-28 05:42:49.596648 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:42:49.596662 | orchestrator | Saturday 28 March 2026 05:42:12 +0000 (0:00:01.209) 0:28:19.024 ******** 2026-03-28 05:42:49.596676 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:42:49.596692 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:42:49.596706 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:42:49.596720 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:42:49.596734 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:42:49.596749 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:42:49.596762 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:42:49.596776 | orchestrator | 2026-03-28 05:42:49.596790 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:42:49.596803 | orchestrator | Saturday 28 March 2026 05:42:15 +0000 (0:00:02.417) 0:28:21.441 ******** 2026-03-28 05:42:49.596819 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:42:49.596835 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:42:49.596896 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:42:49.596913 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:42:49.596927 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-03-28 05:42:49.596941 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:42:49.596955 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:42:49.596965 | orchestrator | 2026-03-28 05:42:49.596975 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:42:49.596985 | orchestrator | Saturday 28 March 2026 05:42:17 +0000 (0:00:02.540) 0:28:23.982 ******** 2026-03-28 05:42:49.596994 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-28 05:42:49.597005 | orchestrator | 2026-03-28 05:42:49.597014 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:42:49.597023 | orchestrator | Saturday 28 March 2026 05:42:18 +0000 (0:00:01.274) 0:28:25.257 ******** 2026-03-28 05:42:49.597032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-28 05:42:49.597041 | orchestrator | 2026-03-28 05:42:49.597064 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:42:49.597073 | orchestrator | Saturday 28 March 2026 05:42:20 +0000 (0:00:01.378) 0:28:26.635 ******** 2026-03-28 05:42:49.597082 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597093 | orchestrator | 2026-03-28 05:42:49.597107 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:42:49.597121 | orchestrator | Saturday 28 March 2026 05:42:21 +0000 (0:00:01.537) 0:28:28.173 ******** 2026-03-28 05:42:49.597135 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597147 | orchestrator | 2026-03-28 05:42:49.597160 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-28 05:42:49.597174 | orchestrator | Saturday 28 March 2026 05:42:22 +0000 (0:00:01.204) 0:28:29.378 ******** 2026-03-28 05:42:49.597188 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597202 | orchestrator | 2026-03-28 05:42:49.597217 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:42:49.597231 | orchestrator | Saturday 28 March 2026 05:42:24 +0000 (0:00:01.116) 0:28:30.494 ******** 2026-03-28 05:42:49.597246 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597260 | orchestrator | 2026-03-28 05:42:49.597274 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 05:42:49.597289 | orchestrator | Saturday 28 March 2026 05:42:25 +0000 (0:00:01.124) 0:28:31.618 ******** 2026-03-28 05:42:49.597302 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597314 | orchestrator | 2026-03-28 05:42:49.597322 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 05:42:49.597330 | orchestrator | Saturday 28 March 2026 05:42:26 +0000 (0:00:01.563) 0:28:33.182 ******** 2026-03-28 05:42:49.597337 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597345 | orchestrator | 2026-03-28 05:42:49.597353 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 05:42:49.597361 | orchestrator | Saturday 28 March 2026 05:42:27 +0000 (0:00:01.117) 0:28:34.300 ******** 2026-03-28 05:42:49.597372 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597384 | orchestrator | 2026-03-28 05:42:49.597397 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 05:42:49.597410 | orchestrator | Saturday 28 March 2026 05:42:29 +0000 (0:00:01.157) 0:28:35.457 ******** 2026-03-28 05:42:49.597423 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597437 | 
orchestrator | 2026-03-28 05:42:49.597445 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 05:42:49.597453 | orchestrator | Saturday 28 March 2026 05:42:30 +0000 (0:00:01.644) 0:28:37.101 ******** 2026-03-28 05:42:49.597471 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597504 | orchestrator | 2026-03-28 05:42:49.597513 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 05:42:49.597541 | orchestrator | Saturday 28 March 2026 05:42:32 +0000 (0:00:01.603) 0:28:38.704 ******** 2026-03-28 05:42:49.597549 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597557 | orchestrator | 2026-03-28 05:42:49.597565 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:42:49.597573 | orchestrator | Saturday 28 March 2026 05:42:33 +0000 (0:00:00.871) 0:28:39.576 ******** 2026-03-28 05:42:49.597581 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597589 | orchestrator | 2026-03-28 05:42:49.597597 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 05:42:49.597604 | orchestrator | Saturday 28 March 2026 05:42:33 +0000 (0:00:00.789) 0:28:40.366 ******** 2026-03-28 05:42:49.597612 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597620 | orchestrator | 2026-03-28 05:42:49.597628 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:42:49.597636 | orchestrator | Saturday 28 March 2026 05:42:34 +0000 (0:00:00.836) 0:28:41.202 ******** 2026-03-28 05:42:49.597644 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597651 | orchestrator | 2026-03-28 05:42:49.597659 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:42:49.597667 | orchestrator | Saturday 28 March 2026 05:42:35 +0000 
(0:00:00.934) 0:28:42.136 ******** 2026-03-28 05:42:49.597675 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597683 | orchestrator | 2026-03-28 05:42:49.597691 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:42:49.597699 | orchestrator | Saturday 28 March 2026 05:42:36 +0000 (0:00:00.804) 0:28:42.941 ******** 2026-03-28 05:42:49.597706 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597714 | orchestrator | 2026-03-28 05:42:49.597722 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:42:49.597730 | orchestrator | Saturday 28 March 2026 05:42:37 +0000 (0:00:00.831) 0:28:43.772 ******** 2026-03-28 05:42:49.597738 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597745 | orchestrator | 2026-03-28 05:42:49.597753 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:42:49.597761 | orchestrator | Saturday 28 March 2026 05:42:38 +0000 (0:00:00.801) 0:28:44.574 ******** 2026-03-28 05:42:49.597769 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597777 | orchestrator | 2026-03-28 05:42:49.597785 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:42:49.597793 | orchestrator | Saturday 28 March 2026 05:42:38 +0000 (0:00:00.834) 0:28:45.409 ******** 2026-03-28 05:42:49.597800 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597808 | orchestrator | 2026-03-28 05:42:49.597816 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:42:49.597824 | orchestrator | Saturday 28 March 2026 05:42:39 +0000 (0:00:00.803) 0:28:46.213 ******** 2026-03-28 05:42:49.597832 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:42:49.597840 | orchestrator | 2026-03-28 05:42:49.597848 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] ************************** 2026-03-28 05:42:49.597856 | orchestrator | Saturday 28 March 2026 05:42:40 +0000 (0:00:00.817) 0:28:47.031 ******** 2026-03-28 05:42:49.597864 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597872 | orchestrator | 2026-03-28 05:42:49.597880 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 05:42:49.597887 | orchestrator | Saturday 28 March 2026 05:42:41 +0000 (0:00:00.823) 0:28:47.854 ******** 2026-03-28 05:42:49.597902 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597910 | orchestrator | 2026-03-28 05:42:49.597918 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:42:49.597926 | orchestrator | Saturday 28 March 2026 05:42:42 +0000 (0:00:00.814) 0:28:48.669 ******** 2026-03-28 05:42:49.597939 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597947 | orchestrator | 2026-03-28 05:42:49.597955 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:42:49.597963 | orchestrator | Saturday 28 March 2026 05:42:43 +0000 (0:00:00.809) 0:28:49.478 ******** 2026-03-28 05:42:49.597970 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.597978 | orchestrator | 2026-03-28 05:42:49.597986 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:42:49.597994 | orchestrator | Saturday 28 March 2026 05:42:43 +0000 (0:00:00.763) 0:28:50.241 ******** 2026-03-28 05:42:49.598002 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598010 | orchestrator | 2026-03-28 05:42:49.598064 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:42:49.598073 | orchestrator | Saturday 28 March 2026 05:42:44 +0000 (0:00:00.852) 0:28:51.094 ******** 2026-03-28 05:42:49.598081 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:42:49.598088 | orchestrator | 2026-03-28 05:42:49.598096 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:42:49.598104 | orchestrator | Saturday 28 March 2026 05:42:45 +0000 (0:00:00.834) 0:28:51.929 ******** 2026-03-28 05:42:49.598112 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598120 | orchestrator | 2026-03-28 05:42:49.598127 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 05:42:49.598135 | orchestrator | Saturday 28 March 2026 05:42:46 +0000 (0:00:00.964) 0:28:52.894 ******** 2026-03-28 05:42:49.598143 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598151 | orchestrator | 2026-03-28 05:42:49.598159 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:42:49.598167 | orchestrator | Saturday 28 March 2026 05:42:47 +0000 (0:00:00.806) 0:28:53.701 ******** 2026-03-28 05:42:49.598175 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598182 | orchestrator | 2026-03-28 05:42:49.598199 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:42:49.598208 | orchestrator | Saturday 28 March 2026 05:42:48 +0000 (0:00:00.768) 0:28:54.470 ******** 2026-03-28 05:42:49.598215 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598223 | orchestrator | 2026-03-28 05:42:49.598231 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:42:49.598239 | orchestrator | Saturday 28 March 2026 05:42:48 +0000 (0:00:00.787) 0:28:55.257 ******** 2026-03-28 05:42:49.598247 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:42:49.598255 | orchestrator | 2026-03-28 05:42:49.598269 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 
2026-03-28 05:43:35.951299 | orchestrator | Saturday 28 March 2026 05:42:49 +0000 (0:00:00.759) 0:28:56.017 ******** 2026-03-28 05:43:35.951419 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.951437 | orchestrator | 2026-03-28 05:43:35.951450 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:43:35.951462 | orchestrator | Saturday 28 March 2026 05:42:50 +0000 (0:00:00.810) 0:28:56.827 ******** 2026-03-28 05:43:35.951540 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.951555 | orchestrator | 2026-03-28 05:43:35.951575 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:43:35.951588 | orchestrator | Saturday 28 March 2026 05:42:52 +0000 (0:00:01.627) 0:28:58.455 ******** 2026-03-28 05:43:35.951600 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.951612 | orchestrator | 2026-03-28 05:43:35.951623 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:43:35.951635 | orchestrator | Saturday 28 March 2026 05:42:54 +0000 (0:00:02.060) 0:29:00.515 ******** 2026-03-28 05:43:35.951646 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-28 05:43:35.951659 | orchestrator | 2026-03-28 05:43:35.951679 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 05:43:35.951716 | orchestrator | Saturday 28 March 2026 05:42:55 +0000 (0:00:01.161) 0:29:01.677 ******** 2026-03-28 05:43:35.951727 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.951738 | orchestrator | 2026-03-28 05:43:35.951749 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 05:43:35.951760 | orchestrator | Saturday 28 March 2026 05:42:56 +0000 (0:00:01.177) 0:29:02.854 ******** 2026-03-28 05:43:35.951771 | orchestrator | 
skipping: [testbed-node-1] 2026-03-28 05:43:35.951782 | orchestrator | 2026-03-28 05:43:35.951793 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 05:43:35.951804 | orchestrator | Saturday 28 March 2026 05:42:57 +0000 (0:00:01.164) 0:29:04.019 ******** 2026-03-28 05:43:35.951815 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 05:43:35.951826 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 05:43:35.951840 | orchestrator | 2026-03-28 05:43:35.951852 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 05:43:35.951865 | orchestrator | Saturday 28 March 2026 05:42:59 +0000 (0:00:01.939) 0:29:05.959 ******** 2026-03-28 05:43:35.951877 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.951891 | orchestrator | 2026-03-28 05:43:35.951904 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 05:43:35.951916 | orchestrator | Saturday 28 March 2026 05:43:01 +0000 (0:00:01.514) 0:29:07.473 ******** 2026-03-28 05:43:35.951928 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.951941 | orchestrator | 2026-03-28 05:43:35.951952 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 05:43:35.951965 | orchestrator | Saturday 28 March 2026 05:43:02 +0000 (0:00:01.188) 0:29:08.661 ******** 2026-03-28 05:43:35.951977 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.951989 | orchestrator | 2026-03-28 05:43:35.952017 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:43:35.952030 | orchestrator | Saturday 28 March 2026 05:43:03 +0000 (0:00:00.829) 0:29:09.491 ******** 2026-03-28 05:43:35.952042 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
05:43:35.952054 | orchestrator | 2026-03-28 05:43:35.952067 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:43:35.952080 | orchestrator | Saturday 28 March 2026 05:43:03 +0000 (0:00:00.789) 0:29:10.280 ******** 2026-03-28 05:43:35.952092 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-28 05:43:35.952104 | orchestrator | 2026-03-28 05:43:35.952117 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 05:43:35.952129 | orchestrator | Saturday 28 March 2026 05:43:04 +0000 (0:00:01.138) 0:29:11.419 ******** 2026-03-28 05:43:35.952141 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.952154 | orchestrator | 2026-03-28 05:43:35.952165 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 05:43:35.952176 | orchestrator | Saturday 28 March 2026 05:43:06 +0000 (0:00:01.697) 0:29:13.117 ******** 2026-03-28 05:43:35.952187 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 05:43:35.952198 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 05:43:35.952209 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 05:43:35.952220 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952230 | orchestrator | 2026-03-28 05:43:35.952241 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 05:43:35.952252 | orchestrator | Saturday 28 March 2026 05:43:07 +0000 (0:00:01.181) 0:29:14.298 ******** 2026-03-28 05:43:35.952263 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952274 | orchestrator | 2026-03-28 05:43:35.952285 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 
2026-03-28 05:43:35.952296 | orchestrator | Saturday 28 March 2026 05:43:09 +0000 (0:00:01.206) 0:29:15.505 ******** 2026-03-28 05:43:35.952315 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952326 | orchestrator | 2026-03-28 05:43:35.952336 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 05:43:35.952347 | orchestrator | Saturday 28 March 2026 05:43:10 +0000 (0:00:01.200) 0:29:16.705 ******** 2026-03-28 05:43:35.952358 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952369 | orchestrator | 2026-03-28 05:43:35.952380 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 05:43:35.952391 | orchestrator | Saturday 28 March 2026 05:43:11 +0000 (0:00:01.138) 0:29:17.844 ******** 2026-03-28 05:43:35.952402 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952412 | orchestrator | 2026-03-28 05:43:35.952441 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 05:43:35.952453 | orchestrator | Saturday 28 March 2026 05:43:12 +0000 (0:00:01.250) 0:29:19.094 ******** 2026-03-28 05:43:35.952464 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952508 | orchestrator | 2026-03-28 05:43:35.952519 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:43:35.952531 | orchestrator | Saturday 28 March 2026 05:43:13 +0000 (0:00:00.963) 0:29:20.057 ******** 2026-03-28 05:43:35.952542 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.952553 | orchestrator | 2026-03-28 05:43:35.952564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:43:35.952575 | orchestrator | Saturday 28 March 2026 05:43:15 +0000 (0:00:02.163) 0:29:22.220 ******** 2026-03-28 05:43:35.952586 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.952597 | 
orchestrator | 2026-03-28 05:43:35.952608 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:43:35.952619 | orchestrator | Saturday 28 March 2026 05:43:16 +0000 (0:00:00.812) 0:29:23.033 ******** 2026-03-28 05:43:35.952629 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-28 05:43:35.952640 | orchestrator | 2026-03-28 05:43:35.952651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 05:43:35.952662 | orchestrator | Saturday 28 March 2026 05:43:17 +0000 (0:00:01.187) 0:29:24.220 ******** 2026-03-28 05:43:35.952673 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952684 | orchestrator | 2026-03-28 05:43:35.952695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 05:43:35.952706 | orchestrator | Saturday 28 March 2026 05:43:18 +0000 (0:00:01.150) 0:29:25.371 ******** 2026-03-28 05:43:35.952717 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952728 | orchestrator | 2026-03-28 05:43:35.952739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 05:43:35.952750 | orchestrator | Saturday 28 March 2026 05:43:20 +0000 (0:00:01.191) 0:29:26.562 ******** 2026-03-28 05:43:35.952760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952771 | orchestrator | 2026-03-28 05:43:35.952782 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 05:43:35.952793 | orchestrator | Saturday 28 March 2026 05:43:21 +0000 (0:00:01.238) 0:29:27.801 ******** 2026-03-28 05:43:35.952804 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952815 | orchestrator | 2026-03-28 05:43:35.952826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 
05:43:35.952837 | orchestrator | Saturday 28 March 2026 05:43:22 +0000 (0:00:01.163) 0:29:28.964 ******** 2026-03-28 05:43:35.952848 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952859 | orchestrator | 2026-03-28 05:43:35.952870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 05:43:35.952880 | orchestrator | Saturday 28 March 2026 05:43:23 +0000 (0:00:01.216) 0:29:30.181 ******** 2026-03-28 05:43:35.952891 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952902 | orchestrator | 2026-03-28 05:43:35.952913 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 05:43:35.952932 | orchestrator | Saturday 28 March 2026 05:43:24 +0000 (0:00:01.139) 0:29:31.321 ******** 2026-03-28 05:43:35.952949 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.952960 | orchestrator | 2026-03-28 05:43:35.952971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 05:43:35.952982 | orchestrator | Saturday 28 March 2026 05:43:26 +0000 (0:00:01.195) 0:29:32.516 ******** 2026-03-28 05:43:35.952993 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:43:35.953004 | orchestrator | 2026-03-28 05:43:35.953015 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 05:43:35.953026 | orchestrator | Saturday 28 March 2026 05:43:27 +0000 (0:00:01.273) 0:29:33.790 ******** 2026-03-28 05:43:35.953037 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:43:35.953048 | orchestrator | 2026-03-28 05:43:35.953059 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:43:35.953070 | orchestrator | Saturday 28 March 2026 05:43:28 +0000 (0:00:01.045) 0:29:34.835 ******** 2026-03-28 05:43:35.953081 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-28 05:43:35.953092 | orchestrator | 2026-03-28 05:43:35.953103 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 05:43:35.953114 | orchestrator | Saturday 28 March 2026 05:43:29 +0000 (0:00:01.161) 0:29:35.997 ******** 2026-03-28 05:43:35.953125 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-28 05:43:35.953136 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-28 05:43:35.953147 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-28 05:43:35.953158 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-28 05:43:35.953169 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-28 05:43:35.953180 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-28 05:43:35.953191 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-28 05:43:35.953202 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-28 05:43:35.953213 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 05:43:35.953224 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 05:43:35.953235 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 05:43:35.953246 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 05:43:35.953257 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 05:43:35.953268 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 05:43:35.953279 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-28 05:43:35.953290 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-28 05:43:35.953301 | orchestrator | 2026-03-28 05:43:35.953318 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:44:18.291394 | orchestrator | Saturday 28 March 2026 05:43:35 +0000 (0:00:06.369) 0:29:42.366 ******** 2026-03-28 05:44:18.291578 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291599 | orchestrator | 2026-03-28 05:44:18.291612 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:44:18.291624 | orchestrator | Saturday 28 March 2026 05:43:36 +0000 (0:00:00.792) 0:29:43.158 ******** 2026-03-28 05:44:18.291635 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291646 | orchestrator | 2026-03-28 05:44:18.291658 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:44:18.291669 | orchestrator | Saturday 28 March 2026 05:43:37 +0000 (0:00:00.783) 0:29:43.942 ******** 2026-03-28 05:44:18.291680 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291692 | orchestrator | 2026-03-28 05:44:18.291703 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:44:18.291714 | orchestrator | Saturday 28 March 2026 05:43:38 +0000 (0:00:00.800) 0:29:44.743 ******** 2026-03-28 05:44:18.291751 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291763 | orchestrator | 2026-03-28 05:44:18.291774 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 05:44:18.291786 | orchestrator | Saturday 28 March 2026 05:43:39 +0000 (0:00:00.823) 0:29:45.566 ******** 2026-03-28 05:44:18.291797 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291808 | orchestrator | 2026-03-28 05:44:18.291820 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:44:18.291831 | orchestrator | Saturday 28 March 2026 05:43:39 +0000 (0:00:00.814) 0:29:46.381 ******** 2026-03-28 
05:44:18.291878 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291889 | orchestrator | 2026-03-28 05:44:18.291901 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:44:18.291916 | orchestrator | Saturday 28 March 2026 05:43:40 +0000 (0:00:00.857) 0:29:47.240 ******** 2026-03-28 05:44:18.291929 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291941 | orchestrator | 2026-03-28 05:44:18.291955 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:44:18.291968 | orchestrator | Saturday 28 March 2026 05:43:41 +0000 (0:00:00.819) 0:29:48.059 ******** 2026-03-28 05:44:18.291980 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.291994 | orchestrator | 2026-03-28 05:44:18.292006 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:44:18.292019 | orchestrator | Saturday 28 March 2026 05:43:42 +0000 (0:00:00.784) 0:29:48.844 ******** 2026-03-28 05:44:18.292032 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292045 | orchestrator | 2026-03-28 05:44:18.292058 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:44:18.292071 | orchestrator | Saturday 28 March 2026 05:43:43 +0000 (0:00:00.821) 0:29:49.666 ******** 2026-03-28 05:44:18.292084 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292097 | orchestrator | 2026-03-28 05:44:18.292109 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:44:18.292136 | orchestrator | Saturday 28 March 2026 05:43:44 +0000 (0:00:00.906) 0:29:50.572 ******** 2026-03-28 05:44:18.292148 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292161 | orchestrator | 2026-03-28 
05:44:18.292174 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:44:18.292187 | orchestrator | Saturday 28 March 2026 05:43:44 +0000 (0:00:00.821) 0:29:51.394 ******** 2026-03-28 05:44:18.292199 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292211 | orchestrator | 2026-03-28 05:44:18.292224 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:44:18.292237 | orchestrator | Saturday 28 March 2026 05:43:45 +0000 (0:00:00.837) 0:29:52.231 ******** 2026-03-28 05:44:18.292250 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292263 | orchestrator | 2026-03-28 05:44:18.292275 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:44:18.292286 | orchestrator | Saturday 28 March 2026 05:43:46 +0000 (0:00:00.894) 0:29:53.126 ******** 2026-03-28 05:44:18.292297 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292308 | orchestrator | 2026-03-28 05:44:18.292319 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:44:18.292330 | orchestrator | Saturday 28 March 2026 05:43:47 +0000 (0:00:00.870) 0:29:53.996 ******** 2026-03-28 05:44:18.292341 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292352 | orchestrator | 2026-03-28 05:44:18.292363 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:44:18.292374 | orchestrator | Saturday 28 March 2026 05:43:48 +0000 (0:00:00.908) 0:29:54.904 ******** 2026-03-28 05:44:18.292385 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292396 | orchestrator | 2026-03-28 05:44:18.292407 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:44:18.292426 | orchestrator | Saturday 28 March 2026 05:43:49 +0000 (0:00:00.870) 
0:29:55.775 ******** 2026-03-28 05:44:18.292437 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292448 | orchestrator | 2026-03-28 05:44:18.292478 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:44:18.292491 | orchestrator | Saturday 28 March 2026 05:43:50 +0000 (0:00:00.803) 0:29:56.578 ******** 2026-03-28 05:44:18.292503 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292514 | orchestrator | 2026-03-28 05:44:18.292525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:44:18.292536 | orchestrator | Saturday 28 March 2026 05:43:50 +0000 (0:00:00.814) 0:29:57.393 ******** 2026-03-28 05:44:18.292547 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292558 | orchestrator | 2026-03-28 05:44:18.292569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:44:18.292580 | orchestrator | Saturday 28 March 2026 05:43:51 +0000 (0:00:00.788) 0:29:58.181 ******** 2026-03-28 05:44:18.292591 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292603 | orchestrator | 2026-03-28 05:44:18.292632 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:44:18.292644 | orchestrator | Saturday 28 March 2026 05:43:52 +0000 (0:00:00.772) 0:29:58.954 ******** 2026-03-28 05:44:18.292655 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292667 | orchestrator | 2026-03-28 05:44:18.292678 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:44:18.292689 | orchestrator | Saturday 28 March 2026 05:43:53 +0000 (0:00:00.773) 0:29:59.727 ******** 2026-03-28 05:44:18.292700 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:44:18.292711 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:44:18.292722 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:44:18.292733 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292744 | orchestrator | 2026-03-28 05:44:18.292756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:44:18.292767 | orchestrator | Saturday 28 March 2026 05:43:54 +0000 (0:00:01.464) 0:30:01.192 ******** 2026-03-28 05:44:18.292778 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:44:18.292789 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:44:18.292800 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:44:18.292811 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292822 | orchestrator | 2026-03-28 05:44:18.292833 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:44:18.292844 | orchestrator | Saturday 28 March 2026 05:43:56 +0000 (0:00:01.653) 0:30:02.845 ******** 2026-03-28 05:44:18.292855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-28 05:44:18.292866 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-28 05:44:18.292877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-28 05:44:18.292888 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292899 | orchestrator | 2026-03-28 05:44:18.292910 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:44:18.292921 | orchestrator | Saturday 28 March 2026 05:43:57 +0000 (0:00:01.123) 0:30:03.969 ******** 2026-03-28 05:44:18.292932 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.292943 | orchestrator | 2026-03-28 05:44:18.292955 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-28 05:44:18.292966 | orchestrator | Saturday 28 March 2026 05:43:58 +0000 (0:00:00.786) 0:30:04.756 ******** 2026-03-28 05:44:18.292977 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-28 05:44:18.292988 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.293006 | orchestrator | 2026-03-28 05:44:18.293018 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 05:44:18.293029 | orchestrator | Saturday 28 March 2026 05:43:59 +0000 (0:00:00.962) 0:30:05.718 ******** 2026-03-28 05:44:18.293040 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:44:18.293051 | orchestrator | 2026-03-28 05:44:18.293062 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 05:44:18.293078 | orchestrator | Saturday 28 March 2026 05:44:00 +0000 (0:00:01.443) 0:30:07.162 ******** 2026-03-28 05:44:18.293090 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:44:18.293101 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 05:44:18.293112 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:44:18.293123 | orchestrator | 2026-03-28 05:44:18.293134 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 05:44:18.293145 | orchestrator | Saturday 28 March 2026 05:44:02 +0000 (0:00:01.413) 0:30:08.575 ******** 2026-03-28 05:44:18.293156 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-28 05:44:18.293167 | orchestrator | 2026-03-28 05:44:18.293178 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 05:44:18.293189 | orchestrator | Saturday 28 March 2026 05:44:03 +0000 (0:00:01.150) 0:30:09.726 ******** 
2026-03-28 05:44:18.293200 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:44:18.293211 | orchestrator | 2026-03-28 05:44:18.293222 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 05:44:18.293233 | orchestrator | Saturday 28 March 2026 05:44:04 +0000 (0:00:01.492) 0:30:11.219 ******** 2026-03-28 05:44:18.293244 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:44:18.293255 | orchestrator | 2026-03-28 05:44:18.293266 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-28 05:44:18.293277 | orchestrator | Saturday 28 March 2026 05:44:05 +0000 (0:00:01.160) 0:30:12.379 ******** 2026-03-28 05:44:18.293288 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:44:18.293299 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:44:18.293310 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:44:18.293321 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-28 05:44:18.293332 | orchestrator | 2026-03-28 05:44:18.293343 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-28 05:44:18.293354 | orchestrator | Saturday 28 March 2026 05:44:13 +0000 (0:00:07.746) 0:30:20.126 ******** 2026-03-28 05:44:18.293365 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:44:18.293376 | orchestrator | 2026-03-28 05:44:18.293387 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-28 05:44:18.293398 | orchestrator | Saturday 28 March 2026 05:44:15 +0000 (0:00:01.351) 0:30:21.478 ******** 2026-03-28 05:44:18.293409 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 05:44:18.293420 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 
05:44:18.293430 | orchestrator | 2026-03-28 05:44:18.293448 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-28 05:45:06.116742 | orchestrator | Saturday 28 March 2026 05:44:18 +0000 (0:00:03.232) 0:30:24.711 ******** 2026-03-28 05:45:06.116862 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 05:45:06.116879 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-28 05:45:06.116893 | orchestrator | 2026-03-28 05:45:06.116906 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-28 05:45:06.116918 | orchestrator | Saturday 28 March 2026 05:44:20 +0000 (0:00:02.129) 0:30:26.840 ******** 2026-03-28 05:45:06.116930 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:45:06.116942 | orchestrator | 2026-03-28 05:45:06.116954 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-28 05:45:06.117002 | orchestrator | Saturday 28 March 2026 05:44:21 +0000 (0:00:01.493) 0:30:28.333 ******** 2026-03-28 05:45:06.117015 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:45:06.117027 | orchestrator | 2026-03-28 05:45:06.117039 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-28 05:45:06.117051 | orchestrator | Saturday 28 March 2026 05:44:22 +0000 (0:00:00.774) 0:30:29.107 ******** 2026-03-28 05:45:06.117062 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:45:06.117074 | orchestrator | 2026-03-28 05:45:06.117086 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 05:45:06.117098 | orchestrator | Saturday 28 March 2026 05:44:23 +0000 (0:00:00.808) 0:30:29.916 ******** 2026-03-28 05:45:06.117109 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-28 05:45:06.117126 | orchestrator | 2026-03-28 05:45:06.117146 | orchestrator | 
TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-28 05:45:06.117164 | orchestrator | Saturday 28 March 2026 05:44:24 +0000 (0:00:01.174) 0:30:31.091 ******** 2026-03-28 05:45:06.117183 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:45:06.117201 | orchestrator | 2026-03-28 05:45:06.117219 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-28 05:45:06.117238 | orchestrator | Saturday 28 March 2026 05:44:25 +0000 (0:00:01.209) 0:30:32.301 ******** 2026-03-28 05:45:06.117258 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:45:06.117277 | orchestrator | 2026-03-28 05:45:06.117295 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-28 05:45:06.117308 | orchestrator | Saturday 28 March 2026 05:44:27 +0000 (0:00:01.147) 0:30:33.449 ******** 2026-03-28 05:45:06.117320 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-28 05:45:06.117334 | orchestrator | 2026-03-28 05:45:06.117346 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 05:45:06.117360 | orchestrator | Saturday 28 March 2026 05:44:28 +0000 (0:00:01.146) 0:30:34.596 ******** 2026-03-28 05:45:06.117373 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:45:06.117385 | orchestrator | 2026-03-28 05:45:06.117398 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 05:45:06.117410 | orchestrator | Saturday 28 March 2026 05:44:30 +0000 (0:00:01.982) 0:30:36.579 ******** 2026-03-28 05:45:06.117423 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:45:06.117436 | orchestrator | 2026-03-28 05:45:06.117507 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 05:45:06.117523 | orchestrator | Saturday 28 March 2026 05:44:32 +0000 (0:00:02.107) 
0:30:38.686 ******** 2026-03-28 05:45:06.117535 | orchestrator | ok: [testbed-node-1] 2026-03-28 05:45:06.117548 | orchestrator | 2026-03-28 05:45:06.117561 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-28 05:45:06.117573 | orchestrator | Saturday 28 March 2026 05:44:34 +0000 (0:00:02.391) 0:30:41.078 ******** 2026-03-28 05:45:06.117586 | orchestrator | changed: [testbed-node-1] 2026-03-28 05:45:06.117598 | orchestrator | 2026-03-28 05:45:06.117611 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 05:45:06.117624 | orchestrator | Saturday 28 March 2026 05:44:38 +0000 (0:00:03.468) 0:30:44.547 ******** 2026-03-28 05:45:06.117634 | orchestrator | skipping: [testbed-node-1] 2026-03-28 05:45:06.117645 | orchestrator | 2026-03-28 05:45:06.117656 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-28 05:45:06.117667 | orchestrator | 2026-03-28 05:45:06.117677 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-28 05:45:06.117688 | orchestrator | Saturday 28 March 2026 05:44:39 +0000 (0:00:00.991) 0:30:45.538 ******** 2026-03-28 05:45:06.117699 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:45:06.117709 | orchestrator | 2026-03-28 05:45:06.117720 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-28 05:45:06.117731 | orchestrator | Saturday 28 March 2026 05:44:41 +0000 (0:00:02.579) 0:30:48.118 ******** 2026-03-28 05:45:06.117777 | orchestrator | changed: [testbed-node-2] 2026-03-28 05:45:06.117789 | orchestrator | 2026-03-28 05:45:06.117800 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:45:06.117811 | orchestrator | Saturday 28 March 2026 05:44:43 +0000 (0:00:02.152) 0:30:50.270 ******** 2026-03-28 05:45:06.117822 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-28 05:45:06.117833 | orchestrator | 2026-03-28 05:45:06.117844 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:45:06.117854 | orchestrator | Saturday 28 March 2026 05:44:45 +0000 (0:00:01.217) 0:30:51.488 ******** 2026-03-28 05:45:06.117865 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.117876 | orchestrator | 2026-03-28 05:45:06.117887 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:45:06.117898 | orchestrator | Saturday 28 March 2026 05:44:46 +0000 (0:00:01.450) 0:30:52.939 ******** 2026-03-28 05:45:06.117908 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.117919 | orchestrator | 2026-03-28 05:45:06.117930 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:45:06.117940 | orchestrator | Saturday 28 March 2026 05:44:47 +0000 (0:00:01.225) 0:30:54.164 ******** 2026-03-28 05:45:06.117951 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.117962 | orchestrator | 2026-03-28 05:45:06.117973 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:45:06.118001 | orchestrator | Saturday 28 March 2026 05:44:49 +0000 (0:00:01.536) 0:30:55.701 ******** 2026-03-28 05:45:06.118073 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.118088 | orchestrator | 2026-03-28 05:45:06.118099 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:45:06.118110 | orchestrator | Saturday 28 March 2026 05:44:50 +0000 (0:00:01.215) 0:30:56.916 ******** 2026-03-28 05:45:06.118124 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.118144 | orchestrator | 2026-03-28 05:45:06.118162 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2026-03-28 05:45:06.118180 | orchestrator | Saturday 28 March 2026 05:44:51 +0000 (0:00:01.217) 0:30:58.133 ******** 2026-03-28 05:45:06.118200 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.118212 | orchestrator | 2026-03-28 05:45:06.118222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:45:06.118233 | orchestrator | Saturday 28 March 2026 05:44:52 +0000 (0:00:01.177) 0:30:59.310 ******** 2026-03-28 05:45:06.118244 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:06.118255 | orchestrator | 2026-03-28 05:45:06.118266 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:45:06.118277 | orchestrator | Saturday 28 March 2026 05:44:54 +0000 (0:00:01.174) 0:31:00.485 ******** 2026-03-28 05:45:06.118287 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.118298 | orchestrator | 2026-03-28 05:45:06.118309 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:45:06.118320 | orchestrator | Saturday 28 March 2026 05:44:55 +0000 (0:00:01.139) 0:31:01.624 ******** 2026-03-28 05:45:06.118331 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:45:06.118342 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:45:06.118352 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:45:06.118363 | orchestrator | 2026-03-28 05:45:06.118374 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:45:06.118385 | orchestrator | Saturday 28 March 2026 05:44:56 +0000 (0:00:01.734) 0:31:03.359 ******** 2026-03-28 05:45:06.118396 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:06.118407 | orchestrator | 2026-03-28 05:45:06.118417 | orchestrator | TASK [ceph-facts : 
Find a running mon container] ******************************* 2026-03-28 05:45:06.118428 | orchestrator | Saturday 28 March 2026 05:44:58 +0000 (0:00:01.306) 0:31:04.666 ******** 2026-03-28 05:45:06.118448 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:45:06.118485 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:45:06.118496 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:45:06.118507 | orchestrator | 2026-03-28 05:45:06.118518 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:45:06.118529 | orchestrator | Saturday 28 March 2026 05:45:01 +0000 (0:00:03.105) 0:31:07.772 ******** 2026-03-28 05:45:06.118540 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:45:06.118558 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:45:06.118569 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:45:06.118580 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:06.118591 | orchestrator | 2026-03-28 05:45:06.118602 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:45:06.118613 | orchestrator | Saturday 28 March 2026 05:45:02 +0000 (0:00:01.548) 0:31:09.320 ******** 2026-03-28 05:45:06.118626 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:45:06.118641 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-03-28 05:45:06.118652 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:45:06.118663 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:06.118674 | orchestrator | 2026-03-28 05:45:06.118685 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:45:06.118695 | orchestrator | Saturday 28 March 2026 05:45:04 +0000 (0:00:02.008) 0:31:11.329 ******** 2026-03-28 05:45:06.118709 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:06.118735 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:26.279257 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-03-28 05:45:26.279399 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.279431 | orchestrator | 2026-03-28 05:45:26.279581 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:45:26.279611 | orchestrator | Saturday 28 March 2026 05:45:06 +0000 (0:00:01.208) 0:31:12.538 ******** 2026-03-28 05:45:26.279633 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:44:58.778679', 'end': '2026-03-28 05:44:58.827813', 'delta': '0:00:00.049134', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:45:26.279711 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:44:59.360919', 'end': '2026-03-28 05:44:59.418480', 'delta': '0:00:00.057561', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:45:26.279736 | orchestrator | ok: 
[testbed-node-2] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:44:59.966693', 'end': '2026-03-28 05:45:00.026427', 'delta': '0:00:00.059734', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 05:45:26.279756 | orchestrator | 2026-03-28 05:45:26.279773 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 05:45:26.279785 | orchestrator | Saturday 28 March 2026 05:45:07 +0000 (0:00:01.260) 0:31:13.798 ******** 2026-03-28 05:45:26.279798 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:26.279813 | orchestrator | 2026-03-28 05:45:26.279826 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 05:45:26.279838 | orchestrator | Saturday 28 March 2026 05:45:08 +0000 (0:00:01.245) 0:31:15.044 ******** 2026-03-28 05:45:26.279851 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.279865 | orchestrator | 2026-03-28 05:45:26.279878 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 05:45:26.279890 | orchestrator | Saturday 28 March 2026 05:45:10 +0000 (0:00:01.569) 0:31:16.614 ******** 2026-03-28 05:45:26.279902 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:26.279915 | orchestrator | 2026-03-28 05:45:26.279928 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 05:45:26.279940 | 
orchestrator | Saturday 28 March 2026 05:45:11 +0000 (0:00:01.101) 0:31:17.715 ******** 2026-03-28 05:45:26.279952 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:45:26.279965 | orchestrator | 2026-03-28 05:45:26.279978 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:45:26.279990 | orchestrator | Saturday 28 March 2026 05:45:13 +0000 (0:00:01.971) 0:31:19.687 ******** 2026-03-28 05:45:26.280002 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:45:26.280015 | orchestrator | 2026-03-28 05:45:26.280027 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 05:45:26.280039 | orchestrator | Saturday 28 March 2026 05:45:14 +0000 (0:00:01.190) 0:31:20.878 ******** 2026-03-28 05:45:26.280082 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280095 | orchestrator | 2026-03-28 05:45:26.280108 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:45:26.280121 | orchestrator | Saturday 28 March 2026 05:45:15 +0000 (0:00:01.133) 0:31:22.011 ******** 2026-03-28 05:45:26.280133 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280146 | orchestrator | 2026-03-28 05:45:26.280156 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:45:26.280167 | orchestrator | Saturday 28 March 2026 05:45:16 +0000 (0:00:01.278) 0:31:23.289 ******** 2026-03-28 05:45:26.280178 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280188 | orchestrator | 2026-03-28 05:45:26.280199 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:45:26.280210 | orchestrator | Saturday 28 March 2026 05:45:18 +0000 (0:00:01.188) 0:31:24.478 ******** 2026-03-28 05:45:26.280221 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280232 | 
orchestrator | 2026-03-28 05:45:26.280243 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:45:26.280253 | orchestrator | Saturday 28 March 2026 05:45:19 +0000 (0:00:01.159) 0:31:25.637 ******** 2026-03-28 05:45:26.280264 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280275 | orchestrator | 2026-03-28 05:45:26.280285 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:45:26.280296 | orchestrator | Saturday 28 March 2026 05:45:20 +0000 (0:00:01.152) 0:31:26.790 ******** 2026-03-28 05:45:26.280307 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280317 | orchestrator | 2026-03-28 05:45:26.280328 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:45:26.280339 | orchestrator | Saturday 28 March 2026 05:45:21 +0000 (0:00:01.119) 0:31:27.909 ******** 2026-03-28 05:45:26.280349 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280360 | orchestrator | 2026-03-28 05:45:26.280371 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:45:26.280381 | orchestrator | Saturday 28 March 2026 05:45:22 +0000 (0:00:01.158) 0:31:29.068 ******** 2026-03-28 05:45:26.280392 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280403 | orchestrator | 2026-03-28 05:45:26.280414 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:45:26.280425 | orchestrator | Saturday 28 March 2026 05:45:23 +0000 (0:00:01.113) 0:31:30.182 ******** 2026-03-28 05:45:26.280436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:26.280446 | orchestrator | 2026-03-28 05:45:26.280482 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 05:45:26.280493 | orchestrator | Saturday 28 March 2026 
05:45:24 +0000 (0:00:01.137) 0:31:31.319 ******** 2026-03-28 05:45:26.280511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:26.280524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:26.280536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:26.280555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-03-28 05:45:26.280569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:26.280588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:27.517909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:27.518102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e4bb62b9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 
'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:45:27.518150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:27.518164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:45:27.518176 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:45:27.518189 | orchestrator | 2026-03-28 05:45:27.518201 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:45:27.518213 | orchestrator | Saturday 28 March 2026 05:45:26 +0000 (0:00:01.376) 0:31:32.696 ******** 2026-03-28 05:45:27.518246 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518260 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518271 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518290 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518311 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518334 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:45:27.518361 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e4bb62b9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4bb62b9-2528-4afd-b7c7-20e80296c6f7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:46:03.475649 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:46:03.475781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:46:03.475803 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.475823 | orchestrator | 2026-03-28 05:46:03.475841 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:46:03.475860 | 
orchestrator | Saturday 28 March 2026 05:45:27 +0000 (0:00:01.243) 0:31:33.939 ******** 2026-03-28 05:46:03.475877 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.475895 | orchestrator | 2026-03-28 05:46:03.475912 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:46:03.475930 | orchestrator | Saturday 28 March 2026 05:45:29 +0000 (0:00:01.590) 0:31:35.530 ******** 2026-03-28 05:46:03.475947 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.475964 | orchestrator | 2026-03-28 05:46:03.475982 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:46:03.475993 | orchestrator | Saturday 28 March 2026 05:45:30 +0000 (0:00:01.127) 0:31:36.658 ******** 2026-03-28 05:46:03.476003 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.476013 | orchestrator | 2026-03-28 05:46:03.476022 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:46:03.476033 | orchestrator | Saturday 28 March 2026 05:45:31 +0000 (0:00:01.485) 0:31:38.144 ******** 2026-03-28 05:46:03.476044 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476056 | orchestrator | 2026-03-28 05:46:03.476067 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:46:03.476078 | orchestrator | Saturday 28 March 2026 05:45:32 +0000 (0:00:01.162) 0:31:39.307 ******** 2026-03-28 05:46:03.476089 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476100 | orchestrator | 2026-03-28 05:46:03.476111 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:46:03.476123 | orchestrator | Saturday 28 March 2026 05:45:34 +0000 (0:00:01.295) 0:31:40.603 ******** 2026-03-28 05:46:03.476134 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476145 | orchestrator | 2026-03-28 05:46:03.476156 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:46:03.476167 | orchestrator | Saturday 28 March 2026 05:45:35 +0000 (0:00:01.259) 0:31:41.862 ******** 2026-03-28 05:46:03.476178 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-28 05:46:03.476190 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-28 05:46:03.476201 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:46:03.476212 | orchestrator | 2026-03-28 05:46:03.476223 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:46:03.476261 | orchestrator | Saturday 28 March 2026 05:45:37 +0000 (0:00:02.097) 0:31:43.959 ******** 2026-03-28 05:46:03.476273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 05:46:03.476284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 05:46:03.476295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 05:46:03.476306 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476317 | orchestrator | 2026-03-28 05:46:03.476328 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:46:03.476339 | orchestrator | Saturday 28 March 2026 05:45:38 +0000 (0:00:01.222) 0:31:45.182 ******** 2026-03-28 05:46:03.476350 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476361 | orchestrator | 2026-03-28 05:46:03.476372 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:46:03.476398 | orchestrator | Saturday 28 March 2026 05:45:39 +0000 (0:00:01.202) 0:31:46.385 ******** 2026-03-28 05:46:03.476410 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:46:03.476422 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-28 05:46:03.476433 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:46:03.476472 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:46:03.476485 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:46:03.476497 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:46:03.476527 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:46:03.476539 | orchestrator | 2026-03-28 05:46:03.476550 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:46:03.476562 | orchestrator | Saturday 28 March 2026 05:45:42 +0000 (0:00:02.235) 0:31:48.621 ******** 2026-03-28 05:46:03.476572 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:46:03.476583 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:46:03.476594 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:46:03.476605 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:46:03.476616 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:46:03.476627 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:46:03.476639 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:46:03.476650 | orchestrator | 2026-03-28 05:46:03.476661 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 05:46:03.476672 | orchestrator | Saturday 28 March 2026 05:45:44 +0000 (0:00:02.471) 0:31:51.093 
******** 2026-03-28 05:46:03.476682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-28 05:46:03.476695 | orchestrator | 2026-03-28 05:46:03.476706 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:46:03.476717 | orchestrator | Saturday 28 March 2026 05:45:45 +0000 (0:00:01.311) 0:31:52.404 ******** 2026-03-28 05:46:03.476728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-28 05:46:03.476739 | orchestrator | 2026-03-28 05:46:03.476750 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:46:03.476761 | orchestrator | Saturday 28 March 2026 05:45:47 +0000 (0:00:01.180) 0:31:53.584 ******** 2026-03-28 05:46:03.476772 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.476782 | orchestrator | 2026-03-28 05:46:03.476793 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:46:03.476813 | orchestrator | Saturday 28 March 2026 05:45:48 +0000 (0:00:01.557) 0:31:55.141 ******** 2026-03-28 05:46:03.476824 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476835 | orchestrator | 2026-03-28 05:46:03.476852 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 05:46:03.476871 | orchestrator | Saturday 28 March 2026 05:45:49 +0000 (0:00:01.213) 0:31:56.355 ******** 2026-03-28 05:46:03.476890 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.476909 | orchestrator | 2026-03-28 05:46:03.476928 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:46:03.476946 | orchestrator | Saturday 28 March 2026 05:45:51 +0000 (0:00:01.247) 0:31:57.603 ******** 2026-03-28 05:46:03.476966 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
05:46:03.476984 | orchestrator | 2026-03-28 05:46:03.477002 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 05:46:03.477014 | orchestrator | Saturday 28 March 2026 05:45:52 +0000 (0:00:01.180) 0:31:58.783 ******** 2026-03-28 05:46:03.477024 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.477036 | orchestrator | 2026-03-28 05:46:03.477047 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 05:46:03.477058 | orchestrator | Saturday 28 March 2026 05:45:53 +0000 (0:00:01.620) 0:32:00.404 ******** 2026-03-28 05:46:03.477069 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.477080 | orchestrator | 2026-03-28 05:46:03.477091 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 05:46:03.477102 | orchestrator | Saturday 28 March 2026 05:45:55 +0000 (0:00:01.153) 0:32:01.558 ******** 2026-03-28 05:46:03.477113 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.477124 | orchestrator | 2026-03-28 05:46:03.477135 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 05:46:03.477145 | orchestrator | Saturday 28 March 2026 05:45:56 +0000 (0:00:01.160) 0:32:02.718 ******** 2026-03-28 05:46:03.477156 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.477167 | orchestrator | 2026-03-28 05:46:03.477178 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 05:46:03.477189 | orchestrator | Saturday 28 March 2026 05:45:57 +0000 (0:00:01.561) 0:32:04.280 ******** 2026-03-28 05:46:03.477200 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.477211 | orchestrator | 2026-03-28 05:46:03.477222 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 05:46:03.477233 | orchestrator | Saturday 28 March 2026 
05:45:59 +0000 (0:00:01.607) 0:32:05.888 ******** 2026-03-28 05:46:03.477244 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.477255 | orchestrator | 2026-03-28 05:46:03.477270 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:46:03.477288 | orchestrator | Saturday 28 March 2026 05:46:00 +0000 (0:00:00.804) 0:32:06.693 ******** 2026-03-28 05:46:03.477314 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:03.477334 | orchestrator | 2026-03-28 05:46:03.477353 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 05:46:03.477365 | orchestrator | Saturday 28 March 2026 05:46:01 +0000 (0:00:00.840) 0:32:07.534 ******** 2026-03-28 05:46:03.477376 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.477387 | orchestrator | 2026-03-28 05:46:03.477398 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:46:03.477409 | orchestrator | Saturday 28 March 2026 05:46:01 +0000 (0:00:00.786) 0:32:08.321 ******** 2026-03-28 05:46:03.477420 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:03.477431 | orchestrator | 2026-03-28 05:46:03.477442 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:46:03.477478 | orchestrator | Saturday 28 March 2026 05:46:02 +0000 (0:00:00.786) 0:32:09.108 ******** 2026-03-28 05:46:03.477498 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.385425 | orchestrator | 2026-03-28 05:46:45.385605 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:46:45.385659 | orchestrator | Saturday 28 March 2026 05:46:03 +0000 (0:00:00.792) 0:32:09.900 ******** 2026-03-28 05:46:45.385672 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.385684 | orchestrator | 2026-03-28 05:46:45.385696 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:46:45.385707 | orchestrator | Saturday 28 March 2026 05:46:04 +0000 (0:00:00.760) 0:32:10.661 ******** 2026-03-28 05:46:45.385718 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.385729 | orchestrator | 2026-03-28 05:46:45.385740 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:46:45.385752 | orchestrator | Saturday 28 March 2026 05:46:05 +0000 (0:00:00.786) 0:32:11.447 ******** 2026-03-28 05:46:45.385763 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.385775 | orchestrator | 2026-03-28 05:46:45.385786 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:46:45.385797 | orchestrator | Saturday 28 March 2026 05:46:05 +0000 (0:00:00.801) 0:32:12.249 ******** 2026-03-28 05:46:45.385808 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.385819 | orchestrator | 2026-03-28 05:46:45.385830 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:46:45.385841 | orchestrator | Saturday 28 March 2026 05:46:06 +0000 (0:00:00.789) 0:32:13.039 ******** 2026-03-28 05:46:45.385852 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.385863 | orchestrator | 2026-03-28 05:46:45.385874 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 05:46:45.385885 | orchestrator | Saturday 28 March 2026 05:46:07 +0000 (0:00:00.837) 0:32:13.876 ******** 2026-03-28 05:46:45.385896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.385907 | orchestrator | 2026-03-28 05:46:45.385918 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 05:46:45.385929 | orchestrator | Saturday 28 March 2026 05:46:08 +0000 (0:00:00.786) 0:32:14.663 ******** 2026-03-28 05:46:45.385942 | 
orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.385954 | orchestrator | 2026-03-28 05:46:45.385967 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:46:45.385980 | orchestrator | Saturday 28 March 2026 05:46:09 +0000 (0:00:00.792) 0:32:15.455 ******** 2026-03-28 05:46:45.385992 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386005 | orchestrator | 2026-03-28 05:46:45.386128 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:46:45.386151 | orchestrator | Saturday 28 March 2026 05:46:09 +0000 (0:00:00.870) 0:32:16.326 ******** 2026-03-28 05:46:45.386183 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386201 | orchestrator | 2026-03-28 05:46:45.386219 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:46:45.386237 | orchestrator | Saturday 28 March 2026 05:46:10 +0000 (0:00:00.792) 0:32:17.119 ******** 2026-03-28 05:46:45.386255 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386273 | orchestrator | 2026-03-28 05:46:45.386293 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:46:45.386311 | orchestrator | Saturday 28 March 2026 05:46:11 +0000 (0:00:00.796) 0:32:17.915 ******** 2026-03-28 05:46:45.386330 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386347 | orchestrator | 2026-03-28 05:46:45.386358 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:46:45.386369 | orchestrator | Saturday 28 March 2026 05:46:12 +0000 (0:00:00.825) 0:32:18.741 ******** 2026-03-28 05:46:45.386380 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386391 | orchestrator | 2026-03-28 05:46:45.386403 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-03-28 05:46:45.386423 | orchestrator | Saturday 28 March 2026 05:46:13 +0000 (0:00:00.771) 0:32:19.512 ******** 2026-03-28 05:46:45.386464 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386475 | orchestrator | 2026-03-28 05:46:45.386486 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:46:45.386509 | orchestrator | Saturday 28 March 2026 05:46:13 +0000 (0:00:00.796) 0:32:20.309 ******** 2026-03-28 05:46:45.386520 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386531 | orchestrator | 2026-03-28 05:46:45.386542 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:46:45.386553 | orchestrator | Saturday 28 March 2026 05:46:14 +0000 (0:00:00.836) 0:32:21.146 ******** 2026-03-28 05:46:45.386564 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386575 | orchestrator | 2026-03-28 05:46:45.386585 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:46:45.386596 | orchestrator | Saturday 28 March 2026 05:46:15 +0000 (0:00:00.830) 0:32:21.976 ******** 2026-03-28 05:46:45.386607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386618 | orchestrator | 2026-03-28 05:46:45.386629 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 05:46:45.386639 | orchestrator | Saturday 28 March 2026 05:46:16 +0000 (0:00:00.769) 0:32:22.746 ******** 2026-03-28 05:46:45.386650 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386661 | orchestrator | 2026-03-28 05:46:45.386686 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:46:45.386697 | orchestrator | Saturday 28 March 2026 05:46:17 +0000 (0:00:00.766) 0:32:23.513 ******** 2026-03-28 05:46:45.386708 | orchestrator | ok: [testbed-node-2] 
2026-03-28 05:46:45.386719 | orchestrator | 2026-03-28 05:46:45.386730 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:46:45.386740 | orchestrator | Saturday 28 March 2026 05:46:18 +0000 (0:00:01.580) 0:32:25.093 ******** 2026-03-28 05:46:45.386751 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.386762 | orchestrator | 2026-03-28 05:46:45.386773 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:46:45.386784 | orchestrator | Saturday 28 March 2026 05:46:20 +0000 (0:00:02.107) 0:32:27.201 ******** 2026-03-28 05:46:45.386795 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-28 05:46:45.386807 | orchestrator | 2026-03-28 05:46:45.386839 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 05:46:45.386851 | orchestrator | Saturday 28 March 2026 05:46:22 +0000 (0:00:01.308) 0:32:28.509 ******** 2026-03-28 05:46:45.386862 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386873 | orchestrator | 2026-03-28 05:46:45.386883 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 05:46:45.386894 | orchestrator | Saturday 28 March 2026 05:46:23 +0000 (0:00:01.110) 0:32:29.620 ******** 2026-03-28 05:46:45.386905 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.386916 | orchestrator | 2026-03-28 05:46:45.386926 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 05:46:45.386937 | orchestrator | Saturday 28 March 2026 05:46:24 +0000 (0:00:01.170) 0:32:30.790 ******** 2026-03-28 05:46:45.386948 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 05:46:45.386958 | orchestrator | ok: [testbed-node-2] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 05:46:45.386969 | orchestrator | 2026-03-28 05:46:45.386980 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 05:46:45.386991 | orchestrator | Saturday 28 March 2026 05:46:26 +0000 (0:00:01.866) 0:32:32.657 ******** 2026-03-28 05:46:45.387001 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.387012 | orchestrator | 2026-03-28 05:46:45.387023 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 05:46:45.387034 | orchestrator | Saturday 28 March 2026 05:46:27 +0000 (0:00:01.471) 0:32:34.128 ******** 2026-03-28 05:46:45.387044 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387055 | orchestrator | 2026-03-28 05:46:45.387066 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 05:46:45.387084 | orchestrator | Saturday 28 March 2026 05:46:28 +0000 (0:00:01.160) 0:32:35.289 ******** 2026-03-28 05:46:45.387095 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387106 | orchestrator | 2026-03-28 05:46:45.387116 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:46:45.387127 | orchestrator | Saturday 28 March 2026 05:46:29 +0000 (0:00:00.813) 0:32:36.102 ******** 2026-03-28 05:46:45.387138 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387148 | orchestrator | 2026-03-28 05:46:45.387159 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:46:45.387170 | orchestrator | Saturday 28 March 2026 05:46:30 +0000 (0:00:00.809) 0:32:36.911 ******** 2026-03-28 05:46:45.387180 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-28 05:46:45.387191 | orchestrator | 2026-03-28 05:46:45.387202 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 05:46:45.387213 | orchestrator | Saturday 28 March 2026 05:46:31 +0000 (0:00:01.123) 0:32:38.035 ******** 2026-03-28 05:46:45.387224 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.387234 | orchestrator | 2026-03-28 05:46:45.387245 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 05:46:45.387256 | orchestrator | Saturday 28 March 2026 05:46:34 +0000 (0:00:02.714) 0:32:40.750 ******** 2026-03-28 05:46:45.387267 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 05:46:45.387278 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 05:46:45.387288 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 05:46:45.387299 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387310 | orchestrator | 2026-03-28 05:46:45.387320 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 05:46:45.387331 | orchestrator | Saturday 28 March 2026 05:46:35 +0000 (0:00:01.200) 0:32:41.950 ******** 2026-03-28 05:46:45.387342 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387353 | orchestrator | 2026-03-28 05:46:45.387363 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-28 05:46:45.387374 | orchestrator | Saturday 28 March 2026 05:46:36 +0000 (0:00:01.183) 0:32:43.134 ******** 2026-03-28 05:46:45.387385 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387395 | orchestrator | 2026-03-28 05:46:45.387406 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 05:46:45.387417 | orchestrator | Saturday 28 March 2026 05:46:37 +0000 (0:00:01.187) 0:32:44.321 ******** 2026-03-28 05:46:45.387428 
| orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387455 | orchestrator | 2026-03-28 05:46:45.387466 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 05:46:45.387477 | orchestrator | Saturday 28 March 2026 05:46:39 +0000 (0:00:01.188) 0:32:45.510 ******** 2026-03-28 05:46:45.387488 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387498 | orchestrator | 2026-03-28 05:46:45.387509 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 05:46:45.387520 | orchestrator | Saturday 28 March 2026 05:46:40 +0000 (0:00:01.144) 0:32:46.654 ******** 2026-03-28 05:46:45.387530 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:46:45.387541 | orchestrator | 2026-03-28 05:46:45.387557 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:46:45.387568 | orchestrator | Saturday 28 March 2026 05:46:41 +0000 (0:00:00.851) 0:32:47.506 ******** 2026-03-28 05:46:45.387579 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.387590 | orchestrator | 2026-03-28 05:46:45.387601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:46:45.387611 | orchestrator | Saturday 28 March 2026 05:46:43 +0000 (0:00:02.312) 0:32:49.818 ******** 2026-03-28 05:46:45.387622 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:46:45.387633 | orchestrator | 2026-03-28 05:46:45.387650 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:46:45.387661 | orchestrator | Saturday 28 March 2026 05:46:44 +0000 (0:00:00.809) 0:32:50.628 ******** 2026-03-28 05:46:45.387672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-28 05:46:45.387683 | orchestrator | 2026-03-28 05:46:45.387700 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-03-28 05:47:22.980882 | orchestrator | Saturday 28 March 2026 05:46:45 +0000 (0:00:01.179) 0:32:51.808 ******** 2026-03-28 05:47:22.981027 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981043 | orchestrator | 2026-03-28 05:47:22.981056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 05:47:22.981081 | orchestrator | Saturday 28 March 2026 05:46:46 +0000 (0:00:01.130) 0:32:52.939 ******** 2026-03-28 05:47:22.981092 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981113 | orchestrator | 2026-03-28 05:47:22.981123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 05:47:22.981134 | orchestrator | Saturday 28 March 2026 05:46:47 +0000 (0:00:01.132) 0:32:54.072 ******** 2026-03-28 05:47:22.981144 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981154 | orchestrator | 2026-03-28 05:47:22.981164 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 05:47:22.981174 | orchestrator | Saturday 28 March 2026 05:46:48 +0000 (0:00:01.138) 0:32:55.210 ******** 2026-03-28 05:47:22.981184 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981194 | orchestrator | 2026-03-28 05:47:22.981203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 05:47:22.981213 | orchestrator | Saturday 28 March 2026 05:46:49 +0000 (0:00:01.111) 0:32:56.322 ******** 2026-03-28 05:47:22.981223 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981233 | orchestrator | 2026-03-28 05:47:22.981243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 05:47:22.981253 | orchestrator | Saturday 28 March 2026 05:46:51 +0000 (0:00:01.187) 0:32:57.509 ******** 2026-03-28 05:47:22.981263 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 05:47:22.981273 | orchestrator | 2026-03-28 05:47:22.981283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 05:47:22.981293 | orchestrator | Saturday 28 March 2026 05:46:52 +0000 (0:00:01.326) 0:32:58.836 ******** 2026-03-28 05:47:22.981303 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981312 | orchestrator | 2026-03-28 05:47:22.981322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 05:47:22.981332 | orchestrator | Saturday 28 March 2026 05:46:53 +0000 (0:00:01.210) 0:33:00.046 ******** 2026-03-28 05:47:22.981343 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981353 | orchestrator | 2026-03-28 05:47:22.981363 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 05:47:22.981373 | orchestrator | Saturday 28 March 2026 05:46:54 +0000 (0:00:01.139) 0:33:01.186 ******** 2026-03-28 05:47:22.981382 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:47:22.981396 | orchestrator | 2026-03-28 05:47:22.981407 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:47:22.981418 | orchestrator | Saturday 28 March 2026 05:46:55 +0000 (0:00:00.882) 0:33:02.069 ******** 2026-03-28 05:47:22.981429 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-28 05:47:22.981472 | orchestrator | 2026-03-28 05:47:22.981489 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 05:47:22.981501 | orchestrator | Saturday 28 March 2026 05:46:56 +0000 (0:00:01.177) 0:33:03.247 ******** 2026-03-28 05:47:22.981512 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-28 05:47:22.981524 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-28 
05:47:22.981535 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-28 05:47:22.981546 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-28 05:47:22.981585 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-28 05:47:22.981596 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-28 05:47:22.981607 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-28 05:47:22.981619 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-28 05:47:22.981630 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 05:47:22.981641 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 05:47:22.981652 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 05:47:22.981663 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 05:47:22.981674 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 05:47:22.981686 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 05:47:22.981698 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-28 05:47:22.981708 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-28 05:47:22.981719 | orchestrator | 2026-03-28 05:47:22.981730 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:47:22.981742 | orchestrator | Saturday 28 March 2026 05:47:03 +0000 (0:00:06.544) 0:33:09.791 ******** 2026-03-28 05:47:22.981753 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981766 | orchestrator | 2026-03-28 05:47:22.981793 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:47:22.981804 | orchestrator | Saturday 28 March 2026 05:47:04 +0000 (0:00:00.820) 0:33:10.611 ******** 
2026-03-28 05:47:22.981813 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981823 | orchestrator | 2026-03-28 05:47:22.981833 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:47:22.981843 | orchestrator | Saturday 28 March 2026 05:47:04 +0000 (0:00:00.798) 0:33:11.410 ******** 2026-03-28 05:47:22.981852 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981862 | orchestrator | 2026-03-28 05:47:22.981872 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:47:22.981881 | orchestrator | Saturday 28 March 2026 05:47:05 +0000 (0:00:00.783) 0:33:12.193 ******** 2026-03-28 05:47:22.981891 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981900 | orchestrator | 2026-03-28 05:47:22.981910 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 05:47:22.981939 | orchestrator | Saturday 28 March 2026 05:47:06 +0000 (0:00:00.793) 0:33:12.987 ******** 2026-03-28 05:47:22.981949 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981959 | orchestrator | 2026-03-28 05:47:22.981968 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:47:22.981978 | orchestrator | Saturday 28 March 2026 05:47:07 +0000 (0:00:00.822) 0:33:13.809 ******** 2026-03-28 05:47:22.981988 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.981997 | orchestrator | 2026-03-28 05:47:22.982007 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:47:22.982076 | orchestrator | Saturday 28 March 2026 05:47:08 +0000 (0:00:00.846) 0:33:14.656 ******** 2026-03-28 05:47:22.982088 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982097 | orchestrator | 2026-03-28 05:47:22.982107 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:47:22.982117 | orchestrator | Saturday 28 March 2026 05:47:08 +0000 (0:00:00.773) 0:33:15.430 ******** 2026-03-28 05:47:22.982126 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982136 | orchestrator | 2026-03-28 05:47:22.982158 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:47:22.982168 | orchestrator | Saturday 28 March 2026 05:47:09 +0000 (0:00:00.802) 0:33:16.232 ******** 2026-03-28 05:47:22.982177 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982196 | orchestrator | 2026-03-28 05:47:22.982206 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:47:22.982216 | orchestrator | Saturday 28 March 2026 05:47:10 +0000 (0:00:00.835) 0:33:17.068 ******** 2026-03-28 05:47:22.982226 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982235 | orchestrator | 2026-03-28 05:47:22.982245 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:47:22.982255 | orchestrator | Saturday 28 March 2026 05:47:11 +0000 (0:00:00.838) 0:33:17.907 ******** 2026-03-28 05:47:22.982264 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982274 | orchestrator | 2026-03-28 05:47:22.982284 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:47:22.982293 | orchestrator | Saturday 28 March 2026 05:47:12 +0000 (0:00:00.839) 0:33:18.746 ******** 2026-03-28 05:47:22.982303 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982312 | orchestrator | 2026-03-28 05:47:22.982322 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:47:22.982332 | orchestrator | Saturday 28 March 2026 05:47:13 +0000 
(0:00:00.806) 0:33:19.553 ******** 2026-03-28 05:47:22.982342 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982351 | orchestrator | 2026-03-28 05:47:22.982361 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:47:22.982371 | orchestrator | Saturday 28 March 2026 05:47:14 +0000 (0:00:00.911) 0:33:20.465 ******** 2026-03-28 05:47:22.982380 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982390 | orchestrator | 2026-03-28 05:47:22.982400 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:47:22.982409 | orchestrator | Saturday 28 March 2026 05:47:14 +0000 (0:00:00.811) 0:33:21.276 ******** 2026-03-28 05:47:22.982419 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982429 | orchestrator | 2026-03-28 05:47:22.982481 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:47:22.982492 | orchestrator | Saturday 28 March 2026 05:47:15 +0000 (0:00:00.947) 0:33:22.224 ******** 2026-03-28 05:47:22.982502 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982511 | orchestrator | 2026-03-28 05:47:22.982521 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:47:22.982531 | orchestrator | Saturday 28 March 2026 05:47:16 +0000 (0:00:00.810) 0:33:23.034 ******** 2026-03-28 05:47:22.982540 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982550 | orchestrator | 2026-03-28 05:47:22.982560 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:47:22.982571 | orchestrator | Saturday 28 March 2026 05:47:17 +0000 (0:00:00.789) 0:33:23.824 ******** 2026-03-28 05:47:22.982581 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982591 | orchestrator | 
2026-03-28 05:47:22.982600 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:47:22.982610 | orchestrator | Saturday 28 March 2026 05:47:18 +0000 (0:00:00.888) 0:33:24.713 ******** 2026-03-28 05:47:22.982619 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982629 | orchestrator | 2026-03-28 05:47:22.982638 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:47:22.982648 | orchestrator | Saturday 28 March 2026 05:47:19 +0000 (0:00:00.893) 0:33:25.607 ******** 2026-03-28 05:47:22.982657 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982667 | orchestrator | 2026-03-28 05:47:22.982676 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:47:22.982686 | orchestrator | Saturday 28 March 2026 05:47:19 +0000 (0:00:00.826) 0:33:26.434 ******** 2026-03-28 05:47:22.982701 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982711 | orchestrator | 2026-03-28 05:47:22.982720 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:47:22.982730 | orchestrator | Saturday 28 March 2026 05:47:20 +0000 (0:00:00.827) 0:33:27.262 ******** 2026-03-28 05:47:22.982747 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-28 05:47:22.982757 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-28 05:47:22.982767 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-28 05:47:22.982776 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:47:22.982786 | orchestrator | 2026-03-28 05:47:22.982796 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:47:22.982805 | orchestrator | Saturday 28 March 2026 05:47:21 +0000 (0:00:01.071) 0:33:28.333 ******** 2026-03-28 05:47:22.982815 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-28 05:47:22.982832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-28 05:48:20.955422 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-28 05:48:20.955582 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:48:20.955600 | orchestrator | 2026-03-28 05:48:20.955614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:48:20.955628 | orchestrator | Saturday 28 March 2026 05:47:22 +0000 (0:00:01.069) 0:33:29.402 ******** 2026-03-28 05:48:20.955639 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-28 05:48:20.955651 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-28 05:48:20.955663 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-28 05:48:20.955674 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:48:20.955685 | orchestrator | 2026-03-28 05:48:20.955697 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:48:20.955709 | orchestrator | Saturday 28 March 2026 05:47:24 +0000 (0:00:01.058) 0:33:30.461 ******** 2026-03-28 05:48:20.955720 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:48:20.955731 | orchestrator | 2026-03-28 05:48:20.955743 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 05:48:20.955754 | orchestrator | Saturday 28 March 2026 05:47:24 +0000 (0:00:00.818) 0:33:31.280 ******** 2026-03-28 05:48:20.955766 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-28 05:48:20.955777 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:48:20.955789 | orchestrator | 2026-03-28 05:48:20.955800 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 05:48:20.955811 | orchestrator | 
Saturday 28 March 2026 05:47:25 +0000 (0:00:00.916) 0:33:32.196 ******** 2026-03-28 05:48:20.955823 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:48:20.955834 | orchestrator | 2026-03-28 05:48:20.955846 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 05:48:20.955857 | orchestrator | Saturday 28 March 2026 05:47:27 +0000 (0:00:01.475) 0:33:33.672 ******** 2026-03-28 05:48:20.955868 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:48:20.955880 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:48:20.955892 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 05:48:20.955903 | orchestrator | 2026-03-28 05:48:20.955914 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 05:48:20.955926 | orchestrator | Saturday 28 March 2026 05:47:28 +0000 (0:00:01.749) 0:33:35.422 ******** 2026-03-28 05:48:20.955937 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-03-28 05:48:20.955950 | orchestrator | 2026-03-28 05:48:20.955962 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 05:48:20.955975 | orchestrator | Saturday 28 March 2026 05:47:30 +0000 (0:00:01.308) 0:33:36.730 ******** 2026-03-28 05:48:20.955989 | orchestrator | ok: [testbed-node-2] 2026-03-28 05:48:20.956002 | orchestrator | 2026-03-28 05:48:20.956014 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 05:48:20.956027 | orchestrator | Saturday 28 March 2026 05:47:31 +0000 (0:00:01.483) 0:33:38.213 ******** 2026-03-28 05:48:20.956064 | orchestrator | skipping: [testbed-node-2] 2026-03-28 05:48:20.956077 | orchestrator | 2026-03-28 05:48:20.956090 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) 
on a mon node] *********************
2026-03-28 05:48:20.956102 | orchestrator | Saturday 28 March 2026 05:47:32 +0000 (0:00:01.159) 0:33:39.373 ********
2026-03-28 05:48:20.956115 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 05:48:20.956130 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 05:48:20.956149 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 05:48:20.956170 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-03-28 05:48:20.956191 | orchestrator |
2026-03-28 05:48:20.956211 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-28 05:48:20.956233 | orchestrator | Saturday 28 March 2026 05:47:40 +0000 (0:00:07.426) 0:33:46.800 ********
2026-03-28 05:48:20.956252 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.956266 | orchestrator |
2026-03-28 05:48:20.956279 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-28 05:48:20.956299 | orchestrator | Saturday 28 March 2026 05:47:41 +0000 (0:00:01.184) 0:33:47.985 ********
2026-03-28 05:48:20.956318 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 05:48:20.956336 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-28 05:48:20.956354 | orchestrator |
2026-03-28 05:48:20.956372 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-28 05:48:20.956391 | orchestrator | Saturday 28 March 2026 05:47:44 +0000 (0:00:03.390) 0:33:51.375 ********
2026-03-28 05:48:20.956410 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 05:48:20.956453 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-28 05:48:20.956474 | orchestrator |
2026-03-28 05:48:20.956512 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-28 05:48:20.956525 | orchestrator | Saturday 28 March 2026 05:47:46 +0000 (0:00:02.001) 0:33:53.377 ********
2026-03-28 05:48:20.956536 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.956547 | orchestrator |
2026-03-28 05:48:20.956558 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-28 05:48:20.956569 | orchestrator | Saturday 28 March 2026 05:47:48 +0000 (0:00:01.628) 0:33:55.005 ********
2026-03-28 05:48:20.956580 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.956591 | orchestrator |
2026-03-28 05:48:20.956602 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-28 05:48:20.956612 | orchestrator | Saturday 28 March 2026 05:47:49 +0000 (0:00:00.762) 0:33:55.768 ********
2026-03-28 05:48:20.956623 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.956634 | orchestrator |
2026-03-28 05:48:20.956645 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-28 05:48:20.956676 | orchestrator | Saturday 28 March 2026 05:47:50 +0000 (0:00:00.786) 0:33:56.554 ********
2026-03-28 05:48:20.956689 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-03-28 05:48:20.956700 | orchestrator |
2026-03-28 05:48:20.956711 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-28 05:48:20.956722 | orchestrator | Saturday 28 March 2026 05:47:51 +0000 (0:00:01.121) 0:33:57.676 ********
2026-03-28 05:48:20.956733 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.956744 | orchestrator |
2026-03-28 05:48:20.956755 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-28 05:48:20.956768 | orchestrator | Saturday 28 March 2026 05:47:52 +0000 (0:00:01.129) 0:33:58.805 ********
2026-03-28 05:48:20.956788 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.956807 | orchestrator |
2026-03-28 05:48:20.956826 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-28 05:48:20.956845 | orchestrator | Saturday 28 March 2026 05:47:53 +0000 (0:00:01.193) 0:33:59.999 ********
2026-03-28 05:48:20.956869 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-03-28 05:48:20.956880 | orchestrator |
2026-03-28 05:48:20.956890 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-28 05:48:20.956901 | orchestrator | Saturday 28 March 2026 05:47:54 +0000 (0:00:01.112) 0:34:01.112 ********
2026-03-28 05:48:20.956912 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.956923 | orchestrator |
2026-03-28 05:48:20.956934 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-28 05:48:20.956945 | orchestrator | Saturday 28 March 2026 05:47:56 +0000 (0:00:02.067) 0:34:03.179 ********
2026-03-28 05:48:20.956956 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.956967 | orchestrator |
2026-03-28 05:48:20.956977 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-28 05:48:20.956988 | orchestrator | Saturday 28 March 2026 05:47:58 +0000 (0:00:01.961) 0:34:05.140 ********
2026-03-28 05:48:20.956999 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.957010 | orchestrator |
2026-03-28 05:48:20.957021 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-28 05:48:20.957032 | orchestrator | Saturday 28 March 2026 05:48:01 +0000 (0:00:02.443) 0:34:07.584 ********
2026-03-28 05:48:20.957043 | orchestrator | changed: [testbed-node-2]
2026-03-28 05:48:20.957053 | orchestrator |
2026-03-28 05:48:20.957064 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-28 05:48:20.957075 | orchestrator | Saturday 28 March 2026 05:48:04 +0000 (0:00:03.729) 0:34:11.314 ********
2026-03-28 05:48:20.957086 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-28 05:48:20.957097 | orchestrator |
2026-03-28 05:48:20.957108 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-28 05:48:20.957119 | orchestrator | Saturday 28 March 2026 05:48:06 +0000 (0:00:01.523) 0:34:12.837 ********
2026-03-28 05:48:20.957129 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:48:20.957140 | orchestrator |
2026-03-28 05:48:20.957151 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-28 05:48:20.957162 | orchestrator | Saturday 28 March 2026 05:48:08 +0000 (0:00:02.402) 0:34:15.240 ********
2026-03-28 05:48:20.957173 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:48:20.957183 | orchestrator |
2026-03-28 05:48:20.957194 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-28 05:48:20.957205 | orchestrator | Saturday 28 March 2026 05:48:11 +0000 (0:00:02.392) 0:34:17.632 ********
2026-03-28 05:48:20.957216 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.957227 | orchestrator |
2026-03-28 05:48:20.957237 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-28 05:48:20.957248 | orchestrator | Saturday 28 March 2026 05:48:12 +0000 (0:00:01.336) 0:34:18.969 ********
2026-03-28 05:48:20.957259 | orchestrator | ok: [testbed-node-2]
2026-03-28 05:48:20.957270 | orchestrator |
2026-03-28 05:48:20.957281 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-28 05:48:20.957291 | orchestrator | Saturday 28 March 2026 05:48:13 +0000 (0:00:01.159) 0:34:20.128 ********
2026-03-28 05:48:20.957302 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-28 05:48:20.957313 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-28 05:48:20.957324 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.957335 | orchestrator |
2026-03-28 05:48:20.957346 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-28 05:48:20.957357 | orchestrator | Saturday 28 March 2026 05:48:15 +0000 (0:00:01.865) 0:34:21.994 ********
2026-03-28 05:48:20.957368 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-28 05:48:20.957379 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-28 05:48:20.957390 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-28 05:48:20.957407 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-28 05:48:20.957455 | orchestrator | skipping: [testbed-node-2]
2026-03-28 05:48:20.957467 | orchestrator |
2026-03-28 05:48:20.957478 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-03-28 05:48:20.957489 | orchestrator |
2026-03-28 05:48:20.957500 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 05:48:20.957511 | orchestrator | Saturday 28 March 2026 05:48:17 +0000 (0:00:02.057) 0:34:24.052 ********
2026-03-28 05:48:20.957522 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:48:20.957533 | orchestrator | ok: [testbed-node-4]
2026-03-28 05:48:20.957544 | orchestrator | ok: [testbed-node-5]
2026-03-28 05:48:20.957555 | orchestrator |
2026-03-28 05:48:20.957566 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 05:48:20.957577 | orchestrator | Saturday 28 March 2026 05:48:19 +0000 (0:00:01.702) 0:34:25.754 ********
2026-03-28 05:48:20.957588 |
orchestrator | ok: [testbed-node-3] 2026-03-28 05:48:20.957599 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:48:20.957609 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:48:20.957620 | orchestrator | 2026-03-28 05:48:20.957639 | orchestrator | TASK [Get pool list] *********************************************************** 2026-03-28 05:48:27.579567 | orchestrator | Saturday 28 March 2026 05:48:20 +0000 (0:00:01.615) 0:34:27.370 ******** 2026-03-28 05:48:27.579689 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:48:27.579707 | orchestrator | 2026-03-28 05:48:27.579721 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-03-28 05:48:27.579732 | orchestrator | Saturday 28 March 2026 05:48:23 +0000 (0:00:02.990) 0:34:30.360 ******** 2026-03-28 05:48:27.579745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:48:27.579757 | orchestrator | 2026-03-28 05:48:27.579768 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-03-28 05:48:27.579779 | orchestrator | Saturday 28 March 2026 05:48:26 +0000 (0:00:02.989) 0:34:33.350 ******** 2026-03-28 05:48:27.579797 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-28T03:05:52.445004+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:27.579879 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-28T03:07:07.346051+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:27.579896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-28T03:07:11.267331+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 
0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:27.579932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-28T03:08:13.255768+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 
'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:28.379157 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-28T03:08:19.375196+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:28.379306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-28T03:08:25.604529+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:28.379347 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-28T03:08:31.923244+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 
'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '187', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:28.379376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-28T03:08:38.170885+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 
'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:28.379400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': 
'2026-03-28T03:08:50.028350+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '126', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '120', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:29.914099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-03-28T03:09:35.876806+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '109', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 109, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 
05:48:29.914197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-28T03:09:44.709379+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '118', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 118, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:29.914256 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-28T03:09:53.953103+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '199', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 199, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 
1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:29.914267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-28T03:10:02.886043+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '133', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 133, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 
'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-28 05:48:29.914294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-03-28T03:10:11.867061+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '143', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 143, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 
1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-28 05:50:17.995666 | orchestrator |
2026-03-28 05:50:17.995753 | orchestrator | TASK [Disable balancer] ********************************************************
2026-03-28 05:50:17.995761 | orchestrator | Saturday 28 March 2026 05:48:29 +0000 (0:00:02.988) 0:34:36.339 ********
2026-03-28 05:50:17.995765 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:50:17.995770 | orchestrator |
2026-03-28 05:50:17.995774 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-03-28 05:50:17.995778 | orchestrator | Saturday 28 March 2026 05:48:32 +0000 (0:00:03.021) 0:34:39.360 ********
2026-03-28 05:50:17.995782 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-03-28 05:50:17.995788 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-03-28 05:50:17.995795 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-03-28 05:50:17.995801 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-03-28 05:50:17.995809 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-03-28 05:50:17.995814 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-03-28 05:50:17.995820 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-03-28 05:50:17.995826 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-03-28 05:50:17.995833 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-03-28 05:50:17.995859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-03-28 05:50:17.995866 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-03-28 05:50:17.995871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-03-28 05:50:17.995877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-03-28 05:50:17.995884 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-03-28 05:50:17.995890 | orchestrator |
2026-03-28 05:50:17.995896 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-03-28 05:50:17.995903 | orchestrator | Saturday 28 March 2026 05:49:48 +0000 (0:01:15.787) 0:35:55.148 ********
2026-03-28 05:50:17.995909 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-03-28 05:50:17.995916 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-03-28 05:50:17.995923 | orchestrator |
2026-03-28 05:50:17.995929 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-28 05:50:17.995935 | orchestrator |
2026-03-28 05:50:17.995941 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 05:50:17.995948 | orchestrator | Saturday 28 March 2026 05:49:54 +0000 (0:00:06.149) 0:36:01.298 ********
2026-03-28 05:50:17.995954 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-03-28 05:50:17.995960 | orchestrator |
2026-03-28
05:50:17.995966 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 05:50:17.995973 | orchestrator | Saturday 28 March 2026 05:49:56 +0000 (0:00:01.177) 0:36:02.476 ********
2026-03-28 05:50:17.995981 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.995987 | orchestrator |
2026-03-28 05:50:17.995990 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 05:50:17.995994 | orchestrator | Saturday 28 March 2026 05:49:57 +0000 (0:00:01.515) 0:36:03.993 ********
2026-03-28 05:50:17.995998 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996002 | orchestrator |
2026-03-28 05:50:17.996006 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 05:50:17.996010 | orchestrator | Saturday 28 March 2026 05:49:58 +0000 (0:00:01.181) 0:36:05.174 ********
2026-03-28 05:50:17.996024 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996028 | orchestrator |
2026-03-28 05:50:17.996032 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 05:50:17.996036 | orchestrator | Saturday 28 March 2026 05:50:00 +0000 (0:00:01.498) 0:36:06.673 ********
2026-03-28 05:50:17.996039 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996043 | orchestrator |
2026-03-28 05:50:17.996047 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 05:50:17.996051 | orchestrator | Saturday 28 March 2026 05:50:01 +0000 (0:00:01.173) 0:36:07.846 ********
2026-03-28 05:50:17.996055 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996058 | orchestrator |
2026-03-28 05:50:17.996062 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 05:50:17.996066 | orchestrator | Saturday 28 March 2026 05:50:02 +0000 (0:00:01.173) 0:36:09.020 ********
2026-03-28 05:50:17.996070 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996074 | orchestrator |
2026-03-28 05:50:17.996077 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 05:50:17.996081 | orchestrator | Saturday 28 March 2026 05:50:03 +0000 (0:00:01.228) 0:36:10.248 ********
2026-03-28 05:50:17.996085 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:17.996089 | orchestrator |
2026-03-28 05:50:17.996093 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 05:50:17.996108 | orchestrator | Saturday 28 March 2026 05:50:05 +0000 (0:00:01.199) 0:36:11.447 ********
2026-03-28 05:50:17.996113 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996117 | orchestrator |
2026-03-28 05:50:17.996126 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 05:50:17.996130 | orchestrator | Saturday 28 March 2026 05:50:06 +0000 (0:00:01.153) 0:36:12.601 ********
2026-03-28 05:50:17.996134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 05:50:17.996137 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:50:17.996141 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:50:17.996145 | orchestrator |
2026-03-28 05:50:17.996149 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 05:50:17.996153 | orchestrator | Saturday 28 March 2026 05:50:07 +0000 (0:00:01.606) 0:36:14.207 ********
2026-03-28 05:50:17.996156 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:17.996160 | orchestrator |
2026-03-28 05:50:17.996164 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 05:50:17.996168 | orchestrator | Saturday 28
March 2026 05:50:08 +0000 (0:00:01.204) 0:36:15.412 ******** 2026-03-28 05:50:17.996171 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:50:17.996175 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:50:17.996179 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:50:17.996183 | orchestrator | 2026-03-28 05:50:17.996186 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:50:17.996190 | orchestrator | Saturday 28 March 2026 05:50:12 +0000 (0:00:03.040) 0:36:18.452 ******** 2026-03-28 05:50:17.996194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 05:50:17.996198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 05:50:17.996202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 05:50:17.996206 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:50:17.996210 | orchestrator | 2026-03-28 05:50:17.996214 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:50:17.996218 | orchestrator | Saturday 28 March 2026 05:50:13 +0000 (0:00:01.464) 0:36:19.917 ******** 2026-03-28 05:50:17.996223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996235 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996240 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:50:17.996244 | orchestrator | 2026-03-28 05:50:17.996248 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:50:17.996253 | orchestrator | Saturday 28 March 2026 05:50:15 +0000 (0:00:02.051) 0:36:21.968 ******** 2026-03-28 05:50:17.996259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:17.996281 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 05:50:17.996286 | orchestrator | 2026-03-28 05:50:17.996290 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:50:17.996295 | orchestrator | Saturday 28 March 2026 05:50:16 +0000 (0:00:01.207) 0:36:23.176 ******** 2026-03-28 05:50:17.996303 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:50:09.455657', 'end': '2026-03-28 05:50:09.501386', 'delta': '0:00:00.045729', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:50:37.308054 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:50:10.018420', 'end': '2026-03-28 05:50:10.062123', 'delta': '0:00:00.043703', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:50:37.308186 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:50:10.824452', 'end': '2026-03-28 05:50:10.880901', 'delta': '0:00:00.056449', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:50:37.308203 | orchestrator |
2026-03-28 05:50:37.308216 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 05:50:37.308228 | orchestrator | Saturday 28 March 2026 05:50:17 +0000 (0:00:01.244) 0:36:24.420 ********
2026-03-28 05:50:37.308238 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308249 | orchestrator |
2026-03-28 05:50:37.308259 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 05:50:37.308270 | orchestrator | Saturday 28 March 2026 05:50:19 +0000 (0:00:01.745) 0:36:26.166 ********
2026-03-28 05:50:37.308279 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308290 | orchestrator |
2026-03-28 05:50:37.308300 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 05:50:37.308330 | orchestrator | Saturday 28 March 2026 05:50:20 +0000 (0:00:01.261) 0:36:27.428 ********
2026-03-28 05:50:37.308341 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308351 | orchestrator |
2026-03-28 05:50:37.308361 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 05:50:37.308370 | orchestrator | Saturday 28 March 2026 05:50:22 +0000 (0:00:01.185) 0:36:28.614 ********
2026-03-28 05:50:37.308380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 05:50:37.308390 | orchestrator |
2026-03-28 05:50:37.308415 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:50:37.308479 | orchestrator | Saturday 28 March 2026 05:50:25 +0000 (0:00:02.990) 0:36:31.604 ********
2026-03-28 05:50:37.308489 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308499 | orchestrator |
2026-03-28 05:50:37.308509 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 05:50:37.308519 | orchestrator | Saturday 28 March 2026 05:50:26 +0000 (0:00:01.169) 0:36:32.773 ********
2026-03-28 05:50:37.308528 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308538 | orchestrator |
2026-03-28 05:50:37.308548 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 05:50:37.308557 | orchestrator | Saturday 28 March 2026 05:50:27 +0000 (0:00:01.184) 0:36:33.958 ********
2026-03-28 05:50:37.308567 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308579 | orchestrator |
2026-03-28 05:50:37.308590 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 05:50:37.308601 | orchestrator | Saturday 28 March 2026 05:50:28 +0000 (0:00:01.248) 0:36:35.207 ********
2026-03-28 05:50:37.308613 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308624 | orchestrator |
2026-03-28 05:50:37.308635 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 05:50:37.308646 | orchestrator | Saturday 28 March 2026 05:50:29 +0000 (0:00:01.192) 0:36:36.400 ********
2026-03-28 05:50:37.308657 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308668 | orchestrator |
2026-03-28 05:50:37.308679 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 05:50:37.308690 | orchestrator | Saturday 28 March 2026 05:50:31 +0000 (0:00:01.181) 0:36:37.581 ********
2026-03-28 05:50:37.308701 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308712 | orchestrator |
2026-03-28 05:50:37.308724 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 05:50:37.308736 | orchestrator | Saturday 28 March 2026 05:50:32 +0000 (0:00:01.205) 0:36:38.787 ********
2026-03-28 05:50:37.308748 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308759 | orchestrator |
2026-03-28 05:50:37.308770 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 05:50:37.308782 | orchestrator | Saturday 28 March 2026 05:50:33 +0000 (0:00:01.105) 0:36:39.893 ********
2026-03-28 05:50:37.308793 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308804 | orchestrator |
2026-03-28 05:50:37.308816 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 05:50:37.308827 | orchestrator | Saturday 28 March 2026 05:50:34 +0000 (0:00:01.177) 0:36:41.070 ********
2026-03-28 05:50:37.308855 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:50:37.308866 | orchestrator |
2026-03-28 05:50:37.308877 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 05:50:37.308889 | orchestrator | Saturday 28 March 2026 05:50:35 +0000 (0:00:01.151) 0:36:42.221 ********
2026-03-28 05:50:37.308900 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:50:37.308917 | orchestrator |
2026-03-28 05:50:37.308934 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 05:50:37.308951 | orchestrator | Saturday 28 March 2026 05:50:37 +0000 (0:00:01.274) 0:36:43.496 ********
2026-03-28 05:50:37.308970 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:37.309000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}})  2026-03-28 05:50:37.309019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:50:37.309044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}})  2026-03-28 05:50:37.309064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:37.309082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:37.309114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:50:38.634954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635110 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}})  2026-03-28 05:50:38.635125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}})  2026-03-28 05:50:38.635136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:50:38.635208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:50:38.635253 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:50:38.635266 | orchestrator | 2026-03-28 05:50:38.635278 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:50:38.635290 | orchestrator | Saturday 28 March 2026 05:50:38 +0000 (0:00:01.361) 0:36:44.857 ******** 2026-03-28 05:50:38.635303 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:38.635331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:50:39.831537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:51:00.281690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:51:00.281808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:51:00.281850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:51:00.281882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:51:00.281910 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.281924 | orchestrator | 2026-03-28 05:51:00.281936 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:51:00.281948 | orchestrator | Saturday 28 March 2026 05:50:39 +0000 (0:00:01.400) 0:36:46.258 ******** 2026-03-28 05:51:00.281959 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:00.281970 | orchestrator | 2026-03-28 05:51:00.281981 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:51:00.281992 | orchestrator | Saturday 28 March 2026 05:50:41 +0000 (0:00:01.528) 0:36:47.786 ******** 2026-03-28 05:51:00.282003 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:00.282073 | orchestrator | 2026-03-28 05:51:00.282087 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:51:00.282098 | orchestrator | Saturday 28 March 2026 05:50:42 +0000 (0:00:01.121) 0:36:48.908 ******** 2026-03-28 05:51:00.282108 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:00.282119 | orchestrator | 2026-03-28 05:51:00.282131 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:51:00.282141 | orchestrator | Saturday 28 March 2026 05:50:43 +0000 (0:00:01.441) 0:36:50.349 ******** 2026-03-28 05:51:00.282152 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282163 | orchestrator | 2026-03-28 05:51:00.282174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:51:00.282185 | orchestrator | Saturday 28 March 2026 05:50:45 +0000 (0:00:01.210) 0:36:51.560 ******** 2026-03-28 05:51:00.282196 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
05:51:00.282208 | orchestrator | 2026-03-28 05:51:00.282221 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:51:00.282234 | orchestrator | Saturday 28 March 2026 05:50:46 +0000 (0:00:01.325) 0:36:52.886 ******** 2026-03-28 05:51:00.282247 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282259 | orchestrator | 2026-03-28 05:51:00.282272 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:51:00.282285 | orchestrator | Saturday 28 March 2026 05:50:47 +0000 (0:00:01.169) 0:36:54.056 ******** 2026-03-28 05:51:00.282298 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-28 05:51:00.282311 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-28 05:51:00.282324 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-28 05:51:00.282346 | orchestrator | 2026-03-28 05:51:00.282359 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:51:00.282372 | orchestrator | Saturday 28 March 2026 05:50:50 +0000 (0:00:02.558) 0:36:56.614 ******** 2026-03-28 05:51:00.282385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 05:51:00.282398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 05:51:00.282411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 05:51:00.282450 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282463 | orchestrator | 2026-03-28 05:51:00.282475 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:51:00.282489 | orchestrator | Saturday 28 March 2026 05:50:51 +0000 (0:00:01.240) 0:36:57.855 ******** 2026-03-28 05:51:00.282501 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-28 05:51:00.282514 | 
orchestrator | 2026-03-28 05:51:00.282527 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:51:00.282541 | orchestrator | Saturday 28 March 2026 05:50:52 +0000 (0:00:01.135) 0:36:58.991 ******** 2026-03-28 05:51:00.282554 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282567 | orchestrator | 2026-03-28 05:51:00.282577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:51:00.282588 | orchestrator | Saturday 28 March 2026 05:50:53 +0000 (0:00:01.148) 0:37:00.139 ******** 2026-03-28 05:51:00.282599 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282610 | orchestrator | 2026-03-28 05:51:00.282620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:51:00.282631 | orchestrator | Saturday 28 March 2026 05:50:54 +0000 (0:00:01.281) 0:37:01.421 ******** 2026-03-28 05:51:00.282642 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282652 | orchestrator | 2026-03-28 05:51:00.282663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:51:00.282673 | orchestrator | Saturday 28 March 2026 05:50:56 +0000 (0:00:01.222) 0:37:02.643 ******** 2026-03-28 05:51:00.282684 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:00.282695 | orchestrator | 2026-03-28 05:51:00.282705 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:51:00.282716 | orchestrator | Saturday 28 March 2026 05:50:57 +0000 (0:00:01.231) 0:37:03.875 ******** 2026-03-28 05:51:00.282727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 05:51:00.282738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 05:51:00.282748 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-28 05:51:00.282759 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282770 | orchestrator | 2026-03-28 05:51:00.282781 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:51:00.282791 | orchestrator | Saturday 28 March 2026 05:50:58 +0000 (0:00:01.426) 0:37:05.302 ******** 2026-03-28 05:51:00.282802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 05:51:00.282813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 05:51:00.282824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 05:51:00.282835 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:00.282845 | orchestrator | 2026-03-28 05:51:00.282948 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:51:48.855755 | orchestrator | Saturday 28 March 2026 05:51:00 +0000 (0:00:01.402) 0:37:06.704 ******** 2026-03-28 05:51:48.855873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 05:51:48.855890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 05:51:48.855902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 05:51:48.855914 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:48.855949 | orchestrator | 2026-03-28 05:51:48.855962 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:51:48.855973 | orchestrator | Saturday 28 March 2026 05:51:01 +0000 (0:00:01.469) 0:37:08.174 ******** 2026-03-28 05:51:48.855985 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.855997 | orchestrator | 2026-03-28 05:51:48.856009 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 05:51:48.856020 | orchestrator | Saturday 28 March 2026 05:51:02 +0000 
(0:00:01.168) 0:37:09.342 ******** 2026-03-28 05:51:48.856031 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 05:51:48.856042 | orchestrator | 2026-03-28 05:51:48.856053 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:51:48.856064 | orchestrator | Saturday 28 March 2026 05:51:04 +0000 (0:00:01.354) 0:37:10.697 ******** 2026-03-28 05:51:48.856075 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:51:48.856087 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:51:48.856097 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:51:48.856108 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 05:51:48.856119 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:51:48.856130 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:51:48.856141 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:51:48.856152 | orchestrator | 2026-03-28 05:51:48.856178 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:51:48.856190 | orchestrator | Saturday 28 March 2026 05:51:06 +0000 (0:00:02.169) 0:37:12.867 ******** 2026-03-28 05:51:48.856201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:51:48.856212 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:51:48.856222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:51:48.856233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 05:51:48.856244 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 05:51:48.856255 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:51:48.856265 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:51:48.856276 | orchestrator | 2026-03-28 05:51:48.856287 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-28 05:51:48.856298 | orchestrator | Saturday 28 March 2026 05:51:09 +0000 (0:00:02.731) 0:37:15.598 ******** 2026-03-28 05:51:48.856309 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.856320 | orchestrator | 2026-03-28 05:51:48.856330 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-28 05:51:48.856341 | orchestrator | Saturday 28 March 2026 05:51:10 +0000 (0:00:01.533) 0:37:17.131 ******** 2026-03-28 05:51:48.856353 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.856364 | orchestrator | 2026-03-28 05:51:48.856375 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-28 05:51:48.856386 | orchestrator | Saturday 28 March 2026 05:51:11 +0000 (0:00:01.145) 0:37:18.277 ******** 2026-03-28 05:51:48.856396 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.856407 | orchestrator | 2026-03-28 05:51:48.856418 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-28 05:51:48.856455 | orchestrator | Saturday 28 March 2026 05:51:13 +0000 (0:00:01.263) 0:37:19.540 ******** 2026-03-28 05:51:48.856466 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-28 05:51:48.856477 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-28 05:51:48.856496 | orchestrator | 2026-03-28 05:51:48.856508 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-28 05:51:48.856519 | orchestrator | Saturday 28 March 2026 05:51:17 +0000 (0:00:04.359) 0:37:23.900 ******** 2026-03-28 05:51:48.856529 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-28 05:51:48.856541 | orchestrator | 2026-03-28 05:51:48.856552 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:51:48.856563 | orchestrator | Saturday 28 March 2026 05:51:18 +0000 (0:00:01.156) 0:37:25.057 ******** 2026-03-28 05:51:48.856574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-28 05:51:48.856585 | orchestrator | 2026-03-28 05:51:48.856596 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:51:48.856607 | orchestrator | Saturday 28 March 2026 05:51:19 +0000 (0:00:01.133) 0:37:26.190 ******** 2026-03-28 05:51:48.856618 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:51:48.856629 | orchestrator | 2026-03-28 05:51:48.856639 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:51:48.856650 | orchestrator | Saturday 28 March 2026 05:51:20 +0000 (0:00:01.209) 0:37:27.399 ******** 2026-03-28 05:51:48.856661 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.856675 | orchestrator | 2026-03-28 05:51:48.856695 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 05:51:48.856734 | orchestrator | Saturday 28 March 2026 05:51:22 +0000 (0:00:01.543) 0:37:28.942 ******** 2026-03-28 05:51:48.856753 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:51:48.856773 | orchestrator | 2026-03-28 05:51:48.856793 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:51:48.856812 | orchestrator | Saturday 28 March 2026 
05:51:24 +0000 (0:00:01.602) 0:37:30.545 ********
2026-03-28 05:51:48.856830 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.856845 | orchestrator |
2026-03-28 05:51:48.856856 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 05:51:48.856867 | orchestrator | Saturday 28 March 2026 05:51:25 +0000 (0:00:01.624) 0:37:32.170 ********
2026-03-28 05:51:48.856877 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.856888 | orchestrator |
2026-03-28 05:51:48.856899 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 05:51:48.856910 | orchestrator | Saturday 28 March 2026 05:51:26 +0000 (0:00:01.130) 0:37:33.300 ********
2026-03-28 05:51:48.856920 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.856931 | orchestrator |
2026-03-28 05:51:48.856942 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 05:51:48.856953 | orchestrator | Saturday 28 March 2026 05:51:28 +0000 (0:00:01.179) 0:37:34.480 ********
2026-03-28 05:51:48.856963 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.856974 | orchestrator |
2026-03-28 05:51:48.856985 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 05:51:48.856995 | orchestrator | Saturday 28 March 2026 05:51:29 +0000 (0:00:01.170) 0:37:35.651 ********
2026-03-28 05:51:48.857006 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857017 | orchestrator |
2026-03-28 05:51:48.857028 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 05:51:48.857038 | orchestrator | Saturday 28 March 2026 05:51:30 +0000 (0:00:01.547) 0:37:37.198 ********
2026-03-28 05:51:48.857049 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857060 | orchestrator |
2026-03-28 05:51:48.857071 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 05:51:48.857081 | orchestrator | Saturday 28 March 2026 05:51:32 +0000 (0:00:01.513) 0:37:38.712 ********
2026-03-28 05:51:48.857092 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857103 | orchestrator |
2026-03-28 05:51:48.857121 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 05:51:48.857132 | orchestrator | Saturday 28 March 2026 05:51:33 +0000 (0:00:01.160) 0:37:39.872 ********
2026-03-28 05:51:48.857152 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857163 | orchestrator |
2026-03-28 05:51:48.857174 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 05:51:48.857184 | orchestrator | Saturday 28 March 2026 05:51:34 +0000 (0:00:01.151) 0:37:41.024 ********
2026-03-28 05:51:48.857195 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857206 | orchestrator |
2026-03-28 05:51:48.857216 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 05:51:48.857227 | orchestrator | Saturday 28 March 2026 05:51:35 +0000 (0:00:01.136) 0:37:42.160 ********
2026-03-28 05:51:48.857238 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857248 | orchestrator |
2026-03-28 05:51:48.857259 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 05:51:48.857270 | orchestrator | Saturday 28 March 2026 05:51:36 +0000 (0:00:01.148) 0:37:43.308 ********
2026-03-28 05:51:48.857281 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857291 | orchestrator |
2026-03-28 05:51:48.857302 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 05:51:48.857313 | orchestrator | Saturday 28 March 2026 05:51:38 +0000 (0:00:01.238) 0:37:44.547 ********
2026-03-28 05:51:48.857323 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857334 | orchestrator |
2026-03-28 05:51:48.857345 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 05:51:48.857356 | orchestrator | Saturday 28 March 2026 05:51:39 +0000 (0:00:01.123) 0:37:45.670 ********
2026-03-28 05:51:48.857366 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857377 | orchestrator |
2026-03-28 05:51:48.857388 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 05:51:48.857399 | orchestrator | Saturday 28 March 2026 05:51:40 +0000 (0:00:01.207) 0:37:46.878 ********
2026-03-28 05:51:48.857409 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857467 | orchestrator |
2026-03-28 05:51:48.857482 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 05:51:48.857493 | orchestrator | Saturday 28 March 2026 05:51:41 +0000 (0:00:01.138) 0:37:48.016 ********
2026-03-28 05:51:48.857503 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857514 | orchestrator |
2026-03-28 05:51:48.857525 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 05:51:48.857535 | orchestrator | Saturday 28 March 2026 05:51:42 +0000 (0:00:01.191) 0:37:49.208 ********
2026-03-28 05:51:48.857546 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:51:48.857557 | orchestrator |
2026-03-28 05:51:48.857567 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 05:51:48.857578 | orchestrator | Saturday 28 March 2026 05:51:44 +0000 (0:00:01.231) 0:37:50.440 ********
2026-03-28 05:51:48.857589 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857599 | orchestrator |
2026-03-28 05:51:48.857610 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 05:51:48.857621 | orchestrator | Saturday 28 March 2026 05:51:45 +0000 (0:00:01.321) 0:37:51.762 ********
2026-03-28 05:51:48.857640 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857659 | orchestrator |
2026-03-28 05:51:48.857677 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 05:51:48.857696 | orchestrator | Saturday 28 March 2026 05:51:46 +0000 (0:00:01.165) 0:37:52.927 ********
2026-03-28 05:51:48.857716 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857734 | orchestrator |
2026-03-28 05:51:48.857755 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 05:51:48.857773 | orchestrator | Saturday 28 March 2026 05:51:47 +0000 (0:00:01.151) 0:37:54.078 ********
2026-03-28 05:51:48.857793 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:51:48.857811 | orchestrator |
2026-03-28 05:51:48.857842 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 05:52:38.250708 | orchestrator | Saturday 28 March 2026 05:51:48 +0000 (0:00:01.197) 0:37:55.276 ********
2026-03-28 05:52:38.250854 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.250872 | orchestrator |
2026-03-28 05:52:38.250885 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 05:52:38.250896 | orchestrator | Saturday 28 March 2026 05:51:49 +0000 (0:00:01.142) 0:37:56.418 ********
2026-03-28 05:52:38.250907 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.250918 | orchestrator |
2026-03-28 05:52:38.250929 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 05:52:38.250940 | orchestrator | Saturday 28 March 2026 05:51:51 +0000 (0:00:01.151) 0:37:57.569 ********
2026-03-28 05:52:38.250951 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.250962 | orchestrator |
2026-03-28 05:52:38.250973 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 05:52:38.250985 | orchestrator | Saturday 28 March 2026 05:51:52 +0000 (0:00:01.156) 0:37:58.725 ********
2026-03-28 05:52:38.250996 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251007 | orchestrator |
2026-03-28 05:52:38.251018 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 05:52:38.251029 | orchestrator | Saturday 28 March 2026 05:51:53 +0000 (0:00:01.174) 0:37:59.900 ********
2026-03-28 05:52:38.251040 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251050 | orchestrator |
2026-03-28 05:52:38.251061 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 05:52:38.251072 | orchestrator | Saturday 28 March 2026 05:51:54 +0000 (0:00:01.208) 0:38:01.108 ********
2026-03-28 05:52:38.251083 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251094 | orchestrator |
2026-03-28 05:52:38.251105 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 05:52:38.251116 | orchestrator | Saturday 28 March 2026 05:51:55 +0000 (0:00:01.132) 0:38:02.240 ********
2026-03-28 05:52:38.251127 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251138 | orchestrator |
2026-03-28 05:52:38.251149 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 05:52:38.251176 | orchestrator | Saturday 28 March 2026 05:51:56 +0000 (0:00:01.134) 0:38:03.375 ********
2026-03-28 05:52:38.251187 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251198 | orchestrator |
2026-03-28 05:52:38.251209 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 05:52:38.251220 | orchestrator | Saturday 28 March 2026 05:51:58 +0000 (0:00:01.153) 0:38:04.529 ********
2026-03-28 05:52:38.251231 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.251244 | orchestrator |
2026-03-28 05:52:38.251256 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 05:52:38.251269 | orchestrator | Saturday 28 March 2026 05:52:00 +0000 (0:00:02.034) 0:38:06.563 ********
2026-03-28 05:52:38.251281 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.251294 | orchestrator |
2026-03-28 05:52:38.251307 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 05:52:38.251327 | orchestrator | Saturday 28 March 2026 05:52:02 +0000 (0:00:02.229) 0:38:08.793 ********
2026-03-28 05:52:38.251346 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-28 05:52:38.251367 | orchestrator |
2026-03-28 05:52:38.251386 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 05:52:38.251450 | orchestrator | Saturday 28 March 2026 05:52:03 +0000 (0:00:01.209) 0:38:10.003 ********
2026-03-28 05:52:38.251471 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251489 | orchestrator |
2026-03-28 05:52:38.251508 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 05:52:38.251528 | orchestrator | Saturday 28 March 2026 05:52:04 +0000 (0:00:01.150) 0:38:11.154 ********
2026-03-28 05:52:38.251547 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251566 | orchestrator |
2026-03-28 05:52:38.251586 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 05:52:38.251622 | orchestrator | Saturday 28 March 2026 05:52:05 +0000 (0:00:01.235) 0:38:12.389 ********
2026-03-28 05:52:38.251643 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 05:52:38.251662 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 05:52:38.251679 | orchestrator |
2026-03-28 05:52:38.251697 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 05:52:38.251709 | orchestrator | Saturday 28 March 2026 05:52:07 +0000 (0:00:01.813) 0:38:14.203 ********
2026-03-28 05:52:38.251720 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.251731 | orchestrator |
2026-03-28 05:52:38.251742 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 05:52:38.251753 | orchestrator | Saturday 28 March 2026 05:52:09 +0000 (0:00:01.466) 0:38:15.669 ********
2026-03-28 05:52:38.251764 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251775 | orchestrator |
2026-03-28 05:52:38.251786 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 05:52:38.251797 | orchestrator | Saturday 28 March 2026 05:52:10 +0000 (0:00:01.196) 0:38:16.866 ********
2026-03-28 05:52:38.251807 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251818 | orchestrator |
2026-03-28 05:52:38.251829 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 05:52:38.251840 | orchestrator | Saturday 28 March 2026 05:52:11 +0000 (0:00:01.140) 0:38:18.007 ********
2026-03-28 05:52:38.251851 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.251861 | orchestrator |
2026-03-28 05:52:38.251872 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 05:52:38.251883 | orchestrator | Saturday 28 March 2026 05:52:12 +0000 (0:00:01.164) 0:38:19.172 ********
2026-03-28 05:52:38.251894 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-28 05:52:38.251905 | orchestrator |
2026-03-28 05:52:38.251916 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 05:52:38.251945 | orchestrator | Saturday 28 March 2026 05:52:13 +0000 (0:00:01.221) 0:38:20.393 ********
2026-03-28 05:52:38.251956 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.251967 | orchestrator |
2026-03-28 05:52:38.251978 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 05:52:38.251988 | orchestrator | Saturday 28 March 2026 05:52:15 +0000 (0:00:01.760) 0:38:22.154 ********
2026-03-28 05:52:38.251999 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 05:52:38.252010 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 05:52:38.252021 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 05:52:38.252032 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252043 | orchestrator |
2026-03-28 05:52:38.252053 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 05:52:38.252064 | orchestrator | Saturday 28 March 2026 05:52:16 +0000 (0:00:01.142) 0:38:23.296 ********
2026-03-28 05:52:38.252074 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252085 | orchestrator |
2026-03-28 05:52:38.252096 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 05:52:38.252106 | orchestrator | Saturday 28 March 2026 05:52:18 +0000 (0:00:01.231) 0:38:24.528 ********
2026-03-28 05:52:38.252117 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252128 | orchestrator |
2026-03-28 05:52:38.252138 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 05:52:38.252151 | orchestrator | Saturday 28 March 2026 05:52:19 +0000 (0:00:01.256) 0:38:25.784 ********
2026-03-28 05:52:38.252169 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252188 | orchestrator |
2026-03-28 05:52:38.252206 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 05:52:38.252220 | orchestrator | Saturday 28 March 2026 05:52:20 +0000 (0:00:01.179) 0:38:26.964 ********
2026-03-28 05:52:38.252239 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252250 | orchestrator |
2026-03-28 05:52:38.252269 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 05:52:38.252280 | orchestrator | Saturday 28 March 2026 05:52:21 +0000 (0:00:01.237) 0:38:28.201 ********
2026-03-28 05:52:38.252290 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252301 | orchestrator |
2026-03-28 05:52:38.252312 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 05:52:38.252322 | orchestrator | Saturday 28 March 2026 05:52:22 +0000 (0:00:01.176) 0:38:29.379 ********
2026-03-28 05:52:38.252333 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.252344 | orchestrator |
2026-03-28 05:52:38.252354 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 05:52:38.252365 | orchestrator | Saturday 28 March 2026 05:52:25 +0000 (0:00:02.518) 0:38:31.897 ********
2026-03-28 05:52:38.252376 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.252386 | orchestrator |
2026-03-28 05:52:38.252397 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 05:52:38.252435 | orchestrator | Saturday 28 March 2026 05:52:26 +0000 (0:00:01.171) 0:38:33.069 ********
2026-03-28 05:52:38.252446 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-28 05:52:38.252457 | orchestrator |
2026-03-28 05:52:38.252468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 05:52:38.252479 | orchestrator | Saturday 28 March 2026 05:52:27 +0000 (0:00:01.142) 0:38:34.211 ********
2026-03-28 05:52:38.252490 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252500 | orchestrator |
2026-03-28 05:52:38.252511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 05:52:38.252522 | orchestrator | Saturday 28 March 2026 05:52:28 +0000 (0:00:01.142) 0:38:35.354 ********
2026-03-28 05:52:38.252533 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252543 | orchestrator |
2026-03-28 05:52:38.252554 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 05:52:38.252565 | orchestrator | Saturday 28 March 2026 05:52:30 +0000 (0:00:01.135) 0:38:36.490 ********
2026-03-28 05:52:38.252576 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252587 | orchestrator |
2026-03-28 05:52:38.252597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 05:52:38.252608 | orchestrator | Saturday 28 March 2026 05:52:31 +0000 (0:00:01.199) 0:38:37.689 ********
2026-03-28 05:52:38.252619 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252630 | orchestrator |
2026-03-28 05:52:38.252640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 05:52:38.252651 | orchestrator | Saturday 28 March 2026 05:52:32 +0000 (0:00:01.180) 0:38:38.870 ********
2026-03-28 05:52:38.252662 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252673 | orchestrator |
2026-03-28 05:52:38.252684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 05:52:38.252694 | orchestrator | Saturday 28 March 2026 05:52:33 +0000 (0:00:01.138) 0:38:40.008 ********
2026-03-28 05:52:38.252705 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252716 | orchestrator |
2026-03-28 05:52:38.252727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 05:52:38.252738 | orchestrator | Saturday 28 March 2026 05:52:34 +0000 (0:00:01.202) 0:38:41.210 ********
2026-03-28 05:52:38.252749 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252760 | orchestrator |
2026-03-28 05:52:38.252770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 05:52:38.252781 | orchestrator | Saturday 28 March 2026 05:52:35 +0000 (0:00:01.130) 0:38:42.341 ********
2026-03-28 05:52:38.252792 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:52:38.252803 | orchestrator |
2026-03-28 05:52:38.252813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 05:52:38.252831 | orchestrator | Saturday 28 March 2026 05:52:37 +0000 (0:00:01.158) 0:38:43.499 ********
2026-03-28 05:52:38.252842 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:52:38.252853 | orchestrator |
2026-03-28 05:52:38.252864 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 05:52:38.252888 | orchestrator | Saturday 28 March 2026 05:52:38 +0000 (0:00:01.171) 0:38:44.671 ********
2026-03-28 05:53:29.110058 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-28 05:53:29.110181 | orchestrator |
2026-03-28 05:53:29.110198 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 05:53:29.110211 | orchestrator | Saturday 28 March 2026 05:52:39 +0000 (0:00:01.114) 0:38:45.786 ********
2026-03-28 05:53:29.110223 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 05:53:29.110235 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 05:53:29.110247 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 05:53:29.110258 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 05:53:29.110269 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 05:53:29.110280 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 05:53:29.110291 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 05:53:29.110302 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 05:53:29.110314 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 05:53:29.110325 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 05:53:29.110336 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 05:53:29.110347 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 05:53:29.110410 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 05:53:29.110422 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 05:53:29.110432 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 05:53:29.110443 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 05:53:29.110454 | orchestrator |
2026-03-28 05:53:29.110482 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 05:53:29.110493 | orchestrator | Saturday 28 March 2026 05:52:45 +0000 (0:00:06.610) 0:38:52.397 ********
2026-03-28 05:53:29.110504 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-28 05:53:29.110515 | orchestrator |
2026-03-28 05:53:29.110526 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 05:53:29.110537 | orchestrator | Saturday 28 March 2026 05:52:47 +0000 (0:00:01.504) 0:38:53.901 ********
2026-03-28 05:53:29.110548 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 05:53:29.110560 | orchestrator |
2026-03-28 05:53:29.110573 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 05:53:29.110585 | orchestrator | Saturday 28 March 2026 05:52:49 +0000 (0:00:01.576) 0:38:55.478 ********
2026-03-28 05:53:29.110598 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 05:53:29.110610 | orchestrator |
2026-03-28 05:53:29.110622 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 05:53:29.110635 | orchestrator | Saturday 28 March 2026 05:52:51 +0000 (0:00:02.022) 0:38:57.501 ********
2026-03-28 05:53:29.110647 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110660 | orchestrator |
2026-03-28 05:53:29.110673 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 05:53:29.110685 | orchestrator | Saturday 28 March 2026 05:52:52 +0000 (0:00:01.178) 0:38:58.679 ********
2026-03-28 05:53:29.110719 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110732 | orchestrator |
2026-03-28 05:53:29.110744 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 05:53:29.110756 | orchestrator | Saturday 28 March 2026 05:52:53 +0000 (0:00:01.119) 0:38:59.799 ********
2026-03-28 05:53:29.110768 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110781 | orchestrator |
2026-03-28 05:53:29.110793 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 05:53:29.110805 | orchestrator | Saturday 28 March 2026 05:52:54 +0000 (0:00:01.142) 0:39:00.942 ********
2026-03-28 05:53:29.110818 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110829 | orchestrator |
2026-03-28 05:53:29.110842 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 05:53:29.110854 | orchestrator | Saturday 28 March 2026 05:52:55 +0000 (0:00:01.142) 0:39:02.085 ********
2026-03-28 05:53:29.110865 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110877 | orchestrator |
2026-03-28 05:53:29.110890 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 05:53:29.110903 | orchestrator | Saturday 28 March 2026 05:52:56 +0000 (0:00:01.189) 0:39:03.274 ********
2026-03-28 05:53:29.110915 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110927 | orchestrator |
2026-03-28 05:53:29.110937 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 05:53:29.110948 | orchestrator | Saturday 28 March 2026 05:52:58 +0000 (0:00:01.196) 0:39:04.470 ********
2026-03-28 05:53:29.110959 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.110969 | orchestrator |
2026-03-28 05:53:29.110980 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 05:53:29.110991 | orchestrator | Saturday 28 March 2026 05:52:59 +0000 (0:00:01.144) 0:39:05.614 ********
2026-03-28 05:53:29.111001 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111012 | orchestrator |
2026-03-28 05:53:29.111023 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 05:53:29.111033 | orchestrator | Saturday 28 March 2026 05:53:00 +0000 (0:00:01.132) 0:39:06.747 ********
2026-03-28 05:53:29.111045 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111056 | orchestrator |
2026-03-28 05:53:29.111085 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 05:53:29.111097 | orchestrator | Saturday 28 March 2026 05:53:01 +0000 (0:00:01.170) 0:39:07.917 ********
2026-03-28 05:53:29.111108 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111119 | orchestrator |
2026-03-28 05:53:29.111130 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 05:53:29.111140 | orchestrator | Saturday 28 March 2026 05:53:02 +0000 (0:00:01.232) 0:39:09.150 ********
2026-03-28 05:53:29.111151 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:53:29.111163 | orchestrator |
2026-03-28 05:53:29.111174 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 05:53:29.111184 | orchestrator | Saturday 28 March 2026 05:53:03 +0000 (0:00:01.222) 0:39:10.373 ********
2026-03-28 05:53:29.111195 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 05:53:29.111206 | orchestrator |
2026-03-28 05:53:29.111217 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 05:53:29.111228 | orchestrator | Saturday 28 March 2026 05:53:08 +0000 (0:00:04.450) 0:39:14.824 ********
2026-03-28 05:53:29.111239 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 05:53:29.111250 | orchestrator |
2026-03-28 05:53:29.111260 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 05:53:29.111271 | orchestrator | Saturday 28 March 2026 05:53:09 +0000 (0:00:01.277) 0:39:16.102 ********
2026-03-28 05:53:29.111290 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-28 05:53:29.111313 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-28 05:53:29.111326 | orchestrator |
2026-03-28 05:53:29.111337 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 05:53:29.111348 | orchestrator | Saturday 28 March 2026 05:53:17 +0000 (0:00:07.863) 0:39:23.965 ********
2026-03-28 05:53:29.111392 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111403 | orchestrator |
2026-03-28 05:53:29.111414 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 05:53:29.111425 | orchestrator | Saturday 28 March 2026 05:53:18 +0000 (0:00:01.178) 0:39:25.144 ********
2026-03-28 05:53:29.111436 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111446 | orchestrator |
2026-03-28 05:53:29.111458 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 05:53:29.111469 | orchestrator | Saturday 28 March 2026 05:53:19 +0000 (0:00:01.200) 0:39:26.344 ********
2026-03-28 05:53:29.111480 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111490 | orchestrator |
2026-03-28 05:53:29.111501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 05:53:29.111512 | orchestrator | Saturday 28 March 2026 05:53:21 +0000 (0:00:01.230) 0:39:27.574 ********
2026-03-28 05:53:29.111522 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111533 | orchestrator |
2026-03-28 05:53:29.111544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 05:53:29.111555 | orchestrator | Saturday 28 March 2026 05:53:22 +0000 (0:00:01.219) 0:39:28.794 ********
2026-03-28 05:53:29.111565 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111576 | orchestrator |
2026-03-28 05:53:29.111587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 05:53:29.111597 | orchestrator | Saturday 28 March 2026 05:53:23 +0000 (0:00:01.189) 0:39:29.983 ********
2026-03-28 05:53:29.111608 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:53:29.111619 | orchestrator |
2026-03-28 05:53:29.111630 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 05:53:29.111641 | orchestrator | Saturday 28 March 2026 05:53:24 +0000 (0:00:01.245) 0:39:31.230 ********
2026-03-28 05:53:29.111651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:53:29.111662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:53:29.111673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:53:29.111684 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111695 | orchestrator |
2026-03-28 05:53:29.111706 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 05:53:29.111717 | orchestrator | Saturday 28 March 2026 05:53:26 +0000 (0:00:01.422) 0:39:32.652 ********
2026-03-28 05:53:29.111728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:53:29.111738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:53:29.111749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:53:29.111760 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:53:29.111771 | orchestrator |
2026-03-28 05:53:29.111782 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 05:53:29.111792 | orchestrator | Saturday 28 March 2026 05:53:27 +0000 (0:00:01.439) 0:39:34.092 ********
2026-03-28 05:53:29.111810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 05:53:29.111821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 05:53:29.111840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 05:54:29.987087 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:54:29.987195 | orchestrator |
2026-03-28 05:54:29.987212 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 05:54:29.987224 | orchestrator | Saturday 28 March 2026 05:53:29 +0000 (0:00:01.437) 0:39:35.530 ********
2026-03-28 05:54:29.987234 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:54:29.987246 | orchestrator |
2026-03-28 05:54:29.987256 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 05:54:29.987266 | orchestrator | Saturday 28 March 2026 05:53:30 +0000 (0:00:01.179) 0:39:36.710 ********
2026-03-28 05:54:29.987276 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 05:54:29.987286 | orchestrator |
2026-03-28 05:54:29.987296 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 05:54:29.987398 | orchestrator | Saturday 28 March 2026 05:53:32 +0000 (0:00:01.882) 0:39:38.592 ********
2026-03-28 05:54:29.987408 | orchestrator | changed: [testbed-node-3]
2026-03-28 05:54:29.987418 | orchestrator |
2026-03-28 05:54:29.987428 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-28 05:54:29.987438 | orchestrator | Saturday 28 March 2026 05:53:33 +0000 (0:00:01.795) 0:39:40.388 ********
2026-03-28 05:54:29.987449 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:54:29.987458 | orchestrator |
2026-03-28 05:54:29.987469 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-28 05:54:29.987480 | orchestrator | Saturday 28 March 2026 05:53:35 +0000 (0:00:01.172) 0:39:41.560 ********
2026-03-28 05:54:29.987490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 05:54:29.987501 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 05:54:29.987511 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:54:29.987520 | orchestrator |
2026-03-28 05:54:29.987547 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-28 05:54:29.987557 | orchestrator | Saturday 28 March 2026 05:53:36 +0000 (0:00:01.752) 0:39:43.313 ********
2026-03-28 05:54:29.987567 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-03-28 05:54:29.987577 | orchestrator |
2026-03-28 05:54:29.987586 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-28 05:54:29.987596 | orchestrator | Saturday 28 March 2026 05:53:38 +0000 (0:00:01.529) 0:39:44.843 ********
2026-03-28 05:54:29.987606 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:54:29.987616 | orchestrator |
2026-03-28 05:54:29.987627 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-28 05:54:29.987644 | orchestrator | Saturday 28 March 2026 05:53:39 +0000 (0:00:01.123) 0:39:45.966 ********
2026-03-28 05:54:29.987661 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:54:29.987677 | orchestrator |
2026-03-28 05:54:29.987694 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-28 05:54:29.987711 | orchestrator | Saturday 28 March 2026 05:53:40 +0000 (0:00:01.131) 0:39:47.098 ********
2026-03-28 05:54:29.987729 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:54:29.987745 | orchestrator |
2026-03-28 05:54:29.987755 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-28 05:54:29.987765 | orchestrator | Saturday 28 March 2026 05:53:42 +0000 (0:00:01.490) 0:39:48.589 ********
2026-03-28 05:54:29.987775 | orchestrator | ok: [testbed-node-3]
2026-03-28 05:54:29.987785 | orchestrator |
2026-03-28 05:54:29.987795 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-28 05:54:29.987805 | orchestrator | Saturday 28 March 2026 05:53:43 +0000 (0:00:01.230) 0:39:49.819 ********
2026-03-28 05:54:29.987814 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 05:54:29.987849 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 05:54:29.987859 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 05:54:29.987869 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 05:54:29.987879 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 05:54:29.987889 | orchestrator |
2026-03-28 05:54:29.987899 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-28 05:54:29.987908 | orchestrator | Saturday 28 March 2026 05:53:46 +0000 (0:00:02.989) 0:39:52.808 ********
2026-03-28 05:54:29.987918 | orchestrator | skipping: [testbed-node-3]
2026-03-28 05:54:29.987928 | orchestrator | 2026-03-28 05:54:29.987938 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-28 05:54:29.987948 | orchestrator | Saturday 28 March 2026 05:53:47 +0000 (0:00:01.217) 0:39:54.026 ******** 2026-03-28 05:54:29.987957 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-03-28 05:54:29.987967 | orchestrator | 2026-03-28 05:54:29.987977 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-28 05:54:29.987987 | orchestrator | Saturday 28 March 2026 05:53:49 +0000 (0:00:01.694) 0:39:55.720 ******** 2026-03-28 05:54:29.987996 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-28 05:54:29.988006 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-28 05:54:29.988016 | orchestrator | 2026-03-28 05:54:29.988026 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-28 05:54:29.988035 | orchestrator | Saturday 28 March 2026 05:53:51 +0000 (0:00:01.808) 0:39:57.529 ******** 2026-03-28 05:54:29.988045 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:54:29.988055 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 05:54:29.988066 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 05:54:29.988076 | orchestrator | 2026-03-28 05:54:29.988103 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-28 05:54:29.988114 | orchestrator | Saturday 28 March 2026 05:53:54 +0000 (0:00:03.348) 0:40:00.877 ******** 2026-03-28 05:54:29.988124 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-28 05:54:29.988134 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 05:54:29.988143 | orchestrator | ok: [testbed-node-3] 
2026-03-28 05:54:29.988153 | orchestrator | 2026-03-28 05:54:29.988163 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-28 05:54:29.988172 | orchestrator | Saturday 28 March 2026 05:53:56 +0000 (0:00:01.969) 0:40:02.847 ******** 2026-03-28 05:54:29.988182 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988192 | orchestrator | 2026-03-28 05:54:29.988201 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-28 05:54:29.988211 | orchestrator | Saturday 28 March 2026 05:53:57 +0000 (0:00:01.319) 0:40:04.166 ******** 2026-03-28 05:54:29.988221 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988231 | orchestrator | 2026-03-28 05:54:29.988249 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-28 05:54:29.988260 | orchestrator | Saturday 28 March 2026 05:53:58 +0000 (0:00:01.140) 0:40:05.306 ******** 2026-03-28 05:54:29.988270 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988279 | orchestrator | 2026-03-28 05:54:29.988289 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-28 05:54:29.988322 | orchestrator | Saturday 28 March 2026 05:54:00 +0000 (0:00:01.263) 0:40:06.570 ******** 2026-03-28 05:54:29.988332 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-03-28 05:54:29.988342 | orchestrator | 2026-03-28 05:54:29.988352 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-28 05:54:29.988369 | orchestrator | Saturday 28 March 2026 05:54:01 +0000 (0:00:01.525) 0:40:08.095 ******** 2026-03-28 05:54:29.988379 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:54:29.988389 | orchestrator | 2026-03-28 05:54:29.988405 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-03-28 05:54:29.988415 | orchestrator | Saturday 28 March 2026 05:54:03 +0000 (0:00:01.469) 0:40:09.565 ******** 2026-03-28 05:54:29.988425 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:54:29.988434 | orchestrator | 2026-03-28 05:54:29.988444 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-28 05:54:29.988454 | orchestrator | Saturday 28 March 2026 05:54:06 +0000 (0:00:03.515) 0:40:13.080 ******** 2026-03-28 05:54:29.988463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-03-28 05:54:29.988473 | orchestrator | 2026-03-28 05:54:29.988483 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-28 05:54:29.988492 | orchestrator | Saturday 28 March 2026 05:54:08 +0000 (0:00:01.621) 0:40:14.702 ******** 2026-03-28 05:54:29.988502 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:54:29.988511 | orchestrator | 2026-03-28 05:54:29.988521 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-28 05:54:29.988531 | orchestrator | Saturday 28 March 2026 05:54:10 +0000 (0:00:02.002) 0:40:16.705 ******** 2026-03-28 05:54:29.988541 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:54:29.988550 | orchestrator | 2026-03-28 05:54:29.988560 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-28 05:54:29.988570 | orchestrator | Saturday 28 March 2026 05:54:12 +0000 (0:00:01.953) 0:40:18.659 ******** 2026-03-28 05:54:29.988579 | orchestrator | ok: [testbed-node-3] 2026-03-28 05:54:29.988589 | orchestrator | 2026-03-28 05:54:29.988598 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-28 05:54:29.988612 | orchestrator | Saturday 28 March 2026 05:54:14 +0000 (0:00:02.245) 0:40:20.905 ******** 2026-03-28 05:54:29.988629 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 05:54:29.988646 | orchestrator | 2026-03-28 05:54:29.988657 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-28 05:54:29.988667 | orchestrator | Saturday 28 March 2026 05:54:15 +0000 (0:00:01.177) 0:40:22.083 ******** 2026-03-28 05:54:29.988676 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988686 | orchestrator | 2026-03-28 05:54:29.988696 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-28 05:54:29.988705 | orchestrator | Saturday 28 March 2026 05:54:16 +0000 (0:00:01.155) 0:40:23.238 ******** 2026-03-28 05:54:29.988715 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-28 05:54:29.988725 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 05:54:29.988734 | orchestrator | 2026-03-28 05:54:29.988744 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-28 05:54:29.988753 | orchestrator | Saturday 28 March 2026 05:54:18 +0000 (0:00:01.880) 0:40:25.119 ******** 2026-03-28 05:54:29.988763 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-28 05:54:29.988773 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 05:54:29.988782 | orchestrator | 2026-03-28 05:54:29.988792 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-28 05:54:29.988814 | orchestrator | Saturday 28 March 2026 05:54:21 +0000 (0:00:02.875) 0:40:27.994 ******** 2026-03-28 05:54:29.988824 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-28 05:54:29.988843 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-28 05:54:29.988853 | orchestrator | 2026-03-28 05:54:29.988863 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-28 05:54:29.988872 | orchestrator | Saturday 28 March 2026 05:54:26 +0000 (0:00:04.737) 0:40:32.732 ******** 2026-03-28 05:54:29.988882 | orchestrator 
| skipping: [testbed-node-3] 2026-03-28 05:54:29.988891 | orchestrator | 2026-03-28 05:54:29.988901 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-28 05:54:29.988918 | orchestrator | Saturday 28 March 2026 05:54:27 +0000 (0:00:01.226) 0:40:33.958 ******** 2026-03-28 05:54:29.988927 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988937 | orchestrator | 2026-03-28 05:54:29.988947 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-28 05:54:29.988956 | orchestrator | Saturday 28 March 2026 05:54:28 +0000 (0:00:01.231) 0:40:35.189 ******** 2026-03-28 05:54:29.988966 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:54:29.988976 | orchestrator | 2026-03-28 05:54:29.988994 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-28 05:55:17.351836 | orchestrator | Saturday 28 March 2026 05:54:29 +0000 (0:00:01.221) 0:40:36.411 ******** 2026-03-28 05:55:17.351960 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.351988 | orchestrator | 2026-03-28 05:55:17.352007 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-28 05:55:17.352027 | orchestrator | Saturday 28 March 2026 05:54:31 +0000 (0:00:01.204) 0:40:37.616 ******** 2026-03-28 05:55:17.352045 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352064 | orchestrator | 2026-03-28 05:55:17.352082 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-28 05:55:17.352100 | orchestrator | Saturday 28 March 2026 05:54:32 +0000 (0:00:01.167) 0:40:38.784 ******** 2026-03-28 05:55:17.352119 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-28 05:55:17.352141 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-28 05:55:17.352161 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-28 05:55:17.352181 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-28 05:55:17.352200 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:55:17.352218 | orchestrator | 2026-03-28 05:55:17.352237 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 05:55:17.352256 | orchestrator | Saturday 28 March 2026 05:54:46 +0000 (0:00:14.418) 0:40:53.202 ******** 2026-03-28 05:55:17.352309 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352329 | orchestrator | 2026-03-28 05:55:17.352368 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 05:55:17.352389 | orchestrator | Saturday 28 March 2026 05:54:47 +0000 (0:00:01.176) 0:40:54.379 ******** 2026-03-28 05:55:17.352407 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352426 | orchestrator | 2026-03-28 05:55:17.352444 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 05:55:17.352462 | orchestrator | Saturday 28 March 2026 05:54:49 +0000 (0:00:01.133) 0:40:55.513 ******** 2026-03-28 05:55:17.352480 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352499 | orchestrator | 2026-03-28 05:55:17.352517 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 05:55:17.352536 | orchestrator | Saturday 28 March 2026 05:54:50 +0000 (0:00:01.152) 0:40:56.666 ******** 2026-03-28 05:55:17.352554 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352573 | orchestrator 
| 2026-03-28 05:55:17.352591 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 05:55:17.352610 | orchestrator | Saturday 28 March 2026 05:54:51 +0000 (0:00:01.138) 0:40:57.805 ******** 2026-03-28 05:55:17.352630 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352647 | orchestrator | 2026-03-28 05:55:17.352665 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 05:55:17.352685 | orchestrator | Saturday 28 March 2026 05:54:52 +0000 (0:00:01.127) 0:40:58.932 ******** 2026-03-28 05:55:17.352704 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352723 | orchestrator | 2026-03-28 05:55:17.352741 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 05:55:17.352792 | orchestrator | Saturday 28 March 2026 05:54:53 +0000 (0:00:01.151) 0:41:00.084 ******** 2026-03-28 05:55:17.352813 | orchestrator | skipping: [testbed-node-3] 2026-03-28 05:55:17.352829 | orchestrator | 2026-03-28 05:55:17.352847 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-28 05:55:17.352866 | orchestrator | 2026-03-28 05:55:17.352885 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:55:17.352901 | orchestrator | Saturday 28 March 2026 05:54:54 +0000 (0:00:00.975) 0:41:01.060 ******** 2026-03-28 05:55:17.352918 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-28 05:55:17.352936 | orchestrator | 2026-03-28 05:55:17.352954 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:55:17.352972 | orchestrator | Saturday 28 March 2026 05:54:55 +0000 (0:00:01.320) 0:41:02.380 ******** 2026-03-28 05:55:17.352990 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353010 | orchestrator | 
2026-03-28 05:55:17.353030 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:55:17.353048 | orchestrator | Saturday 28 March 2026 05:54:57 +0000 (0:00:01.483) 0:41:03.863 ******** 2026-03-28 05:55:17.353066 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353085 | orchestrator | 2026-03-28 05:55:17.353103 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:55:17.353120 | orchestrator | Saturday 28 March 2026 05:54:58 +0000 (0:00:01.145) 0:41:05.009 ******** 2026-03-28 05:55:17.353138 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353157 | orchestrator | 2026-03-28 05:55:17.353175 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:55:17.353193 | orchestrator | Saturday 28 March 2026 05:55:00 +0000 (0:00:01.463) 0:41:06.473 ******** 2026-03-28 05:55:17.353213 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353230 | orchestrator | 2026-03-28 05:55:17.353242 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:55:17.353252 | orchestrator | Saturday 28 March 2026 05:55:01 +0000 (0:00:01.113) 0:41:07.586 ******** 2026-03-28 05:55:17.353302 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353324 | orchestrator | 2026-03-28 05:55:17.353342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 05:55:17.353360 | orchestrator | Saturday 28 March 2026 05:55:02 +0000 (0:00:01.114) 0:41:08.701 ******** 2026-03-28 05:55:17.353372 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353383 | orchestrator | 2026-03-28 05:55:17.353395 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:55:17.353407 | orchestrator | Saturday 28 March 2026 05:55:03 +0000 (0:00:01.158) 0:41:09.860 
******** 2026-03-28 05:55:17.353442 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:17.353454 | orchestrator | 2026-03-28 05:55:17.353465 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:55:17.353476 | orchestrator | Saturday 28 March 2026 05:55:04 +0000 (0:00:01.147) 0:41:11.007 ******** 2026-03-28 05:55:17.353486 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353497 | orchestrator | 2026-03-28 05:55:17.353508 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:55:17.353519 | orchestrator | Saturday 28 March 2026 05:55:05 +0000 (0:00:01.198) 0:41:12.206 ******** 2026-03-28 05:55:17.353530 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:55:17.353541 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:55:17.353552 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:55:17.353562 | orchestrator | 2026-03-28 05:55:17.353573 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:55:17.353584 | orchestrator | Saturday 28 March 2026 05:55:07 +0000 (0:00:01.950) 0:41:14.156 ******** 2026-03-28 05:55:17.353595 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:17.353621 | orchestrator | 2026-03-28 05:55:17.353632 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 05:55:17.353643 | orchestrator | Saturday 28 March 2026 05:55:08 +0000 (0:00:01.231) 0:41:15.388 ******** 2026-03-28 05:55:17.353654 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:55:17.353665 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:55:17.353686 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:55:17.353697 | orchestrator | 2026-03-28 05:55:17.353708 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 05:55:17.353719 | orchestrator | Saturday 28 March 2026 05:55:12 +0000 (0:00:03.147) 0:41:18.535 ******** 2026-03-28 05:55:17.353729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 05:55:17.353741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 05:55:17.353752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 05:55:17.353763 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:17.353774 | orchestrator | 2026-03-28 05:55:17.353785 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 05:55:17.353795 | orchestrator | Saturday 28 March 2026 05:55:13 +0000 (0:00:01.852) 0:41:20.388 ******** 2026-03-28 05:55:17.353808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353845 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:17.353856 | orchestrator | 2026-03-28 
05:55:17.353867 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 05:55:17.353878 | orchestrator | Saturday 28 March 2026 05:55:16 +0000 (0:00:02.109) 0:41:22.498 ******** 2026-03-28 05:55:17.353892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:17.353929 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:17.353940 | orchestrator | 2026-03-28 05:55:17.353958 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 05:55:36.896496 | orchestrator | Saturday 28 March 2026 05:55:17 +0000 (0:00:01.271) 0:41:23.770 ******** 2026-03-28 05:55:36.896595 | orchestrator | 
ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:55:09.458249', 'end': '2026-03-28 05:55:09.510421', 'delta': '0:00:00.052172', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 05:55:36.896625 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:55:10.263795', 'end': '2026-03-28 05:55:10.311115', 'delta': '0:00:00.047320', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 05:55:36.896635 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:55:10.803777', 'end': '2026-03-28 05:55:10.857122', 'delta': '0:00:00.053345', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 05:55:36.896643 | orchestrator | 2026-03-28 05:55:36.896651 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 05:55:36.896659 | orchestrator | Saturday 28 March 2026 05:55:18 +0000 (0:00:01.200) 0:41:24.970 ******** 2026-03-28 05:55:36.896666 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.896675 | orchestrator | 2026-03-28 05:55:36.896683 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 05:55:36.896690 | orchestrator | Saturday 28 March 2026 05:55:19 +0000 (0:00:01.300) 0:41:26.270 ******** 2026-03-28 05:55:36.896698 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.896705 | orchestrator | 2026-03-28 05:55:36.896713 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 05:55:36.896720 | orchestrator | Saturday 28 March 2026 05:55:21 +0000 (0:00:01.263) 0:41:27.534 ******** 2026-03-28 05:55:36.896727 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.896735 | orchestrator | 2026-03-28 05:55:36.896743 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 05:55:36.896750 | orchestrator | Saturday 28 March 2026 05:55:22 +0000 (0:00:01.157) 0:41:28.692 ******** 2026-03-28 05:55:36.896758 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:55:36.896766 | orchestrator | 2026-03-28 05:55:36.896773 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:55:36.896780 | orchestrator | 
Saturday 28 March 2026 05:55:24 +0000 (0:00:02.095) 0:41:30.787 ******** 2026-03-28 05:55:36.896788 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.896811 | orchestrator | 2026-03-28 05:55:36.896818 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 05:55:36.896826 | orchestrator | Saturday 28 March 2026 05:55:26 +0000 (0:00:01.651) 0:41:32.438 ******** 2026-03-28 05:55:36.896837 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.896850 | orchestrator | 2026-03-28 05:55:36.896862 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 05:55:36.896875 | orchestrator | Saturday 28 March 2026 05:55:27 +0000 (0:00:01.175) 0:41:33.613 ******** 2026-03-28 05:55:36.896887 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.896899 | orchestrator | 2026-03-28 05:55:36.896910 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 05:55:36.896924 | orchestrator | Saturday 28 March 2026 05:55:28 +0000 (0:00:01.243) 0:41:34.857 ******** 2026-03-28 05:55:36.896935 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.896942 | orchestrator | 2026-03-28 05:55:36.896950 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 05:55:36.896972 | orchestrator | Saturday 28 March 2026 05:55:29 +0000 (0:00:01.163) 0:41:36.021 ******** 2026-03-28 05:55:36.896980 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.896987 | orchestrator | 2026-03-28 05:55:36.896995 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 05:55:36.897002 | orchestrator | Saturday 28 March 2026 05:55:30 +0000 (0:00:01.146) 0:41:37.168 ******** 2026-03-28 05:55:36.897009 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.897016 | orchestrator | 2026-03-28 05:55:36.897023 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 05:55:36.897030 | orchestrator | Saturday 28 March 2026 05:55:31 +0000 (0:00:01.238) 0:41:38.406 ******** 2026-03-28 05:55:36.897037 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.897044 | orchestrator | 2026-03-28 05:55:36.897051 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 05:55:36.897059 | orchestrator | Saturday 28 March 2026 05:55:33 +0000 (0:00:01.113) 0:41:39.520 ******** 2026-03-28 05:55:36.897067 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.897074 | orchestrator | 2026-03-28 05:55:36.897081 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 05:55:36.897088 | orchestrator | Saturday 28 March 2026 05:55:34 +0000 (0:00:01.241) 0:41:40.761 ******** 2026-03-28 05:55:36.897095 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:36.897102 | orchestrator | 2026-03-28 05:55:36.897109 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 05:55:36.897117 | orchestrator | Saturday 28 March 2026 05:55:35 +0000 (0:00:01.145) 0:41:41.907 ******** 2026-03-28 05:55:36.897124 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:36.897132 | orchestrator | 2026-03-28 05:55:36.897139 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 05:55:36.897151 | orchestrator | Saturday 28 March 2026 05:55:36 +0000 (0:00:01.179) 0:41:43.086 ******** 2026-03-28 05:55:36.897160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:36.897171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}})  2026-03-28 05:55:36.897190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 05:55:36.897204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}})  2026-03-28 05:55:36.897226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.249788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.249884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 05:55:38.249916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.249928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:55:38.249956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.249966 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}})  2026-03-28 05:55:38.249977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}})  2026-03-28 05:55:38.250003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.250081 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 05:55:38.250102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.250112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 05:55:38.250121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 05:55:38.250131 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:38.250141 | orchestrator | 2026-03-28 05:55:38.250152 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 05:55:38.250162 | orchestrator | Saturday 28 March 2026 05:55:38 +0000 (0:00:01.374) 0:41:44.460 ******** 2026-03-28 05:55:38.250180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463589 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463706 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:39.463723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041057 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041318 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 05:55:58.041416 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.041434 | orchestrator | 2026-03-28 05:55:58.041451 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 05:55:58.041468 | orchestrator | Saturday 28 March 2026 05:55:39 +0000 (0:00:01.430) 0:41:45.891 ******** 2026-03-28 05:55:58.041483 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:58.041499 | orchestrator | 2026-03-28 05:55:58.041514 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 05:55:58.041529 | orchestrator | Saturday 28 March 2026 05:55:40 +0000 (0:00:01.531) 0:41:47.423 ******** 2026-03-28 05:55:58.041544 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:58.041560 | orchestrator | 2026-03-28 05:55:58.041575 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:55:58.041591 | orchestrator | Saturday 28 March 2026 05:55:42 +0000 (0:00:01.132) 0:41:48.555 ******** 2026-03-28 05:55:58.041606 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:58.041621 | orchestrator | 2026-03-28 05:55:58.041636 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:55:58.041650 | orchestrator | Saturday 28 March 2026 05:55:43 +0000 (0:00:01.483) 0:41:50.039 ******** 2026-03-28 05:55:58.041666 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.041681 | orchestrator | 2026-03-28 05:55:58.041696 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 05:55:58.041713 | orchestrator | Saturday 28 March 2026 05:55:44 +0000 (0:00:01.191) 0:41:51.231 ******** 2026-03-28 05:55:58.041729 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
05:55:58.041746 | orchestrator | 2026-03-28 05:55:58.041762 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 05:55:58.041779 | orchestrator | Saturday 28 March 2026 05:55:46 +0000 (0:00:01.232) 0:41:52.463 ******** 2026-03-28 05:55:58.041795 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.041811 | orchestrator | 2026-03-28 05:55:58.041828 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 05:55:58.041844 | orchestrator | Saturday 28 March 2026 05:55:47 +0000 (0:00:01.195) 0:41:53.659 ******** 2026-03-28 05:55:58.041860 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-28 05:55:58.041877 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-28 05:55:58.041921 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 05:55:58.041938 | orchestrator | 2026-03-28 05:55:58.041954 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 05:55:58.041969 | orchestrator | Saturday 28 March 2026 05:55:49 +0000 (0:00:02.046) 0:41:55.706 ******** 2026-03-28 05:55:58.041985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 05:55:58.042001 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 05:55:58.042014 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 05:55:58.042094 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.042111 | orchestrator | 2026-03-28 05:55:58.042127 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 05:55:58.042143 | orchestrator | Saturday 28 March 2026 05:55:50 +0000 (0:00:01.224) 0:41:56.930 ******** 2026-03-28 05:55:58.042171 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-28 05:55:58.042189 | 
orchestrator | 2026-03-28 05:55:58.042204 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:55:58.042220 | orchestrator | Saturday 28 March 2026 05:55:51 +0000 (0:00:01.266) 0:41:58.197 ******** 2026-03-28 05:55:58.042257 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.042272 | orchestrator | 2026-03-28 05:55:58.042287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 05:55:58.042303 | orchestrator | Saturday 28 March 2026 05:55:52 +0000 (0:00:01.159) 0:41:59.357 ******** 2026-03-28 05:55:58.042318 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.042333 | orchestrator | 2026-03-28 05:55:58.042349 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:55:58.042365 | orchestrator | Saturday 28 March 2026 05:55:54 +0000 (0:00:01.144) 0:42:00.501 ******** 2026-03-28 05:55:58.042380 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:55:58.042395 | orchestrator | 2026-03-28 05:55:58.042410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:55:58.042425 | orchestrator | Saturday 28 March 2026 05:55:55 +0000 (0:00:01.320) 0:42:01.822 ******** 2026-03-28 05:55:58.042440 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:55:58.042455 | orchestrator | 2026-03-28 05:55:58.042470 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:55:58.042485 | orchestrator | Saturday 28 March 2026 05:55:56 +0000 (0:00:01.222) 0:42:03.045 ******** 2026-03-28 05:55:58.042512 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:56:38.350985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:56:38.351123 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-03-28 05:56:38.351140 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.351155 | orchestrator | 2026-03-28 05:56:38.351168 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:56:38.351180 | orchestrator | Saturday 28 March 2026 05:55:58 +0000 (0:00:01.419) 0:42:04.464 ******** 2026-03-28 05:56:38.351191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:56:38.351202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:56:38.351272 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 05:56:38.351283 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.351294 | orchestrator | 2026-03-28 05:56:38.351305 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:56:38.351316 | orchestrator | Saturday 28 March 2026 05:55:59 +0000 (0:00:01.431) 0:42:05.896 ******** 2026-03-28 05:56:38.351327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:56:38.351338 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:56:38.351349 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 05:56:38.351360 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.351371 | orchestrator | 2026-03-28 05:56:38.351382 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:56:38.351393 | orchestrator | Saturday 28 March 2026 05:56:00 +0000 (0:00:01.418) 0:42:07.315 ******** 2026-03-28 05:56:38.351404 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.351416 | orchestrator | 2026-03-28 05:56:38.351427 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 05:56:38.351438 | orchestrator | Saturday 28 March 2026 05:56:02 +0000 
(0:00:01.136) 0:42:08.451 ******** 2026-03-28 05:56:38.351449 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 05:56:38.351460 | orchestrator | 2026-03-28 05:56:38.351470 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 05:56:38.351481 | orchestrator | Saturday 28 March 2026 05:56:03 +0000 (0:00:01.362) 0:42:09.814 ******** 2026-03-28 05:56:38.351516 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:56:38.351531 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:56:38.351543 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:56:38.351555 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 05:56:38.351568 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 05:56:38.351581 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:56:38.351594 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:56:38.351606 | orchestrator | 2026-03-28 05:56:38.351692 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 05:56:38.351705 | orchestrator | Saturday 28 March 2026 05:56:05 +0000 (0:00:02.224) 0:42:12.039 ******** 2026-03-28 05:56:38.351718 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:56:38.351731 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:56:38.351743 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:56:38.351756 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-28 05:56:38.351768 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 05:56:38.351781 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 05:56:38.351793 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 05:56:38.351806 | orchestrator | 2026-03-28 05:56:38.351819 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-28 05:56:38.351831 | orchestrator | Saturday 28 March 2026 05:56:07 +0000 (0:00:02.393) 0:42:14.432 ******** 2026-03-28 05:56:38.351843 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.351857 | orchestrator | 2026-03-28 05:56:38.351869 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-28 05:56:38.351880 | orchestrator | Saturday 28 March 2026 05:56:09 +0000 (0:00:01.117) 0:42:15.550 ******** 2026-03-28 05:56:38.351891 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.351902 | orchestrator | 2026-03-28 05:56:38.351912 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-28 05:56:38.351924 | orchestrator | Saturday 28 March 2026 05:56:09 +0000 (0:00:00.770) 0:42:16.321 ******** 2026-03-28 05:56:38.351935 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.351946 | orchestrator | 2026-03-28 05:56:38.351956 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-28 05:56:38.351967 | orchestrator | Saturday 28 March 2026 05:56:10 +0000 (0:00:00.899) 0:42:17.221 ******** 2026-03-28 05:56:38.351978 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-28 05:56:38.351989 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-28 05:56:38.352001 | orchestrator | 2026-03-28 05:56:38.352012 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-28 05:56:38.352022 | orchestrator | Saturday 28 March 2026 05:56:14 +0000 (0:00:03.692) 0:42:20.913 ******** 2026-03-28 05:56:38.352033 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-28 05:56:38.352045 | orchestrator | 2026-03-28 05:56:38.352056 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 05:56:38.352084 | orchestrator | Saturday 28 March 2026 05:56:15 +0000 (0:00:01.215) 0:42:22.128 ******** 2026-03-28 05:56:38.352103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-28 05:56:38.352115 | orchestrator | 2026-03-28 05:56:38.352126 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 05:56:38.352146 | orchestrator | Saturday 28 March 2026 05:56:16 +0000 (0:00:01.226) 0:42:23.355 ******** 2026-03-28 05:56:38.352157 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352168 | orchestrator | 2026-03-28 05:56:38.352179 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 05:56:38.352190 | orchestrator | Saturday 28 March 2026 05:56:18 +0000 (0:00:01.196) 0:42:24.552 ******** 2026-03-28 05:56:38.352201 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352248 | orchestrator | 2026-03-28 05:56:38.352260 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 05:56:38.352271 | orchestrator | Saturday 28 March 2026 05:56:19 +0000 (0:00:01.504) 0:42:26.057 ******** 2026-03-28 05:56:38.352282 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352293 | orchestrator | 2026-03-28 05:56:38.352304 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 05:56:38.352315 | orchestrator | 
Saturday 28 March 2026 05:56:21 +0000 (0:00:01.612) 0:42:27.669 ******** 2026-03-28 05:56:38.352326 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352337 | orchestrator | 2026-03-28 05:56:38.352349 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 05:56:38.352360 | orchestrator | Saturday 28 March 2026 05:56:22 +0000 (0:00:01.581) 0:42:29.250 ******** 2026-03-28 05:56:38.352371 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352382 | orchestrator | 2026-03-28 05:56:38.352393 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 05:56:38.352404 | orchestrator | Saturday 28 March 2026 05:56:24 +0000 (0:00:01.211) 0:42:30.462 ******** 2026-03-28 05:56:38.352416 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352427 | orchestrator | 2026-03-28 05:56:38.352438 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 05:56:38.352449 | orchestrator | Saturday 28 March 2026 05:56:25 +0000 (0:00:01.155) 0:42:31.617 ******** 2026-03-28 05:56:38.352460 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352471 | orchestrator | 2026-03-28 05:56:38.352482 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 05:56:38.352493 | orchestrator | Saturday 28 March 2026 05:56:26 +0000 (0:00:01.162) 0:42:32.780 ******** 2026-03-28 05:56:38.352504 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352515 | orchestrator | 2026-03-28 05:56:38.352526 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 05:56:38.352537 | orchestrator | Saturday 28 March 2026 05:56:27 +0000 (0:00:01.532) 0:42:34.312 ******** 2026-03-28 05:56:38.352548 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352559 | orchestrator | 2026-03-28 05:56:38.352571 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 05:56:38.352581 | orchestrator | Saturday 28 March 2026 05:56:29 +0000 (0:00:01.554) 0:42:35.866 ******** 2026-03-28 05:56:38.352593 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352603 | orchestrator | 2026-03-28 05:56:38.352615 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 05:56:38.352626 | orchestrator | Saturday 28 March 2026 05:56:30 +0000 (0:00:00.795) 0:42:36.662 ******** 2026-03-28 05:56:38.352637 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352647 | orchestrator | 2026-03-28 05:56:38.352659 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 05:56:38.352670 | orchestrator | Saturday 28 March 2026 05:56:31 +0000 (0:00:00.796) 0:42:37.459 ******** 2026-03-28 05:56:38.352681 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352692 | orchestrator | 2026-03-28 05:56:38.352703 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 05:56:38.352714 | orchestrator | Saturday 28 March 2026 05:56:31 +0000 (0:00:00.811) 0:42:38.270 ******** 2026-03-28 05:56:38.352725 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352736 | orchestrator | 2026-03-28 05:56:38.352747 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 05:56:38.352765 | orchestrator | Saturday 28 March 2026 05:56:32 +0000 (0:00:00.804) 0:42:39.074 ******** 2026-03-28 05:56:38.352776 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352787 | orchestrator | 2026-03-28 05:56:38.352798 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 05:56:38.352809 | orchestrator | Saturday 28 March 2026 05:56:33 +0000 (0:00:00.807) 0:42:39.882 ******** 2026-03-28 05:56:38.352820 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352831 | orchestrator | 2026-03-28 05:56:38.352843 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 05:56:38.352853 | orchestrator | Saturday 28 March 2026 05:56:34 +0000 (0:00:00.766) 0:42:40.648 ******** 2026-03-28 05:56:38.352864 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352875 | orchestrator | 2026-03-28 05:56:38.352887 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 05:56:38.352898 | orchestrator | Saturday 28 March 2026 05:56:34 +0000 (0:00:00.771) 0:42:41.420 ******** 2026-03-28 05:56:38.352909 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:56:38.352920 | orchestrator | 2026-03-28 05:56:38.352931 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 05:56:38.352942 | orchestrator | Saturday 28 March 2026 05:56:35 +0000 (0:00:00.873) 0:42:42.294 ******** 2026-03-28 05:56:38.352953 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.352964 | orchestrator | 2026-03-28 05:56:38.352975 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 05:56:38.352986 | orchestrator | Saturday 28 March 2026 05:56:36 +0000 (0:00:00.916) 0:42:43.210 ******** 2026-03-28 05:56:38.352997 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:56:38.353008 | orchestrator | 2026-03-28 05:56:38.353019 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 05:56:38.353030 | orchestrator | Saturday 28 March 2026 05:56:37 +0000 (0:00:00.796) 0:42:44.007 ******** 2026-03-28 05:56:38.353048 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424676 | orchestrator | 2026-03-28 05:57:21.424787 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 
05:57:21.424817 | orchestrator | Saturday 28 March 2026 05:56:38 +0000 (0:00:00.768) 0:42:44.776 ******** 2026-03-28 05:57:21.424829 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424840 | orchestrator | 2026-03-28 05:57:21.424850 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 05:57:21.424860 | orchestrator | Saturday 28 March 2026 05:56:39 +0000 (0:00:00.791) 0:42:45.568 ******** 2026-03-28 05:57:21.424870 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424879 | orchestrator | 2026-03-28 05:57:21.424889 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 05:57:21.424899 | orchestrator | Saturday 28 March 2026 05:56:39 +0000 (0:00:00.775) 0:42:46.343 ******** 2026-03-28 05:57:21.424908 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424918 | orchestrator | 2026-03-28 05:57:21.424928 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 05:57:21.424937 | orchestrator | Saturday 28 March 2026 05:56:40 +0000 (0:00:00.780) 0:42:47.124 ******** 2026-03-28 05:57:21.424947 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424957 | orchestrator | 2026-03-28 05:57:21.424966 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 05:57:21.424976 | orchestrator | Saturday 28 March 2026 05:56:41 +0000 (0:00:00.769) 0:42:47.894 ******** 2026-03-28 05:57:21.424986 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.424995 | orchestrator | 2026-03-28 05:57:21.425006 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 05:57:21.425016 | orchestrator | Saturday 28 March 2026 05:56:42 +0000 (0:00:00.786) 0:42:48.681 ******** 2026-03-28 05:57:21.425025 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425035 | 
orchestrator | 2026-03-28 05:57:21.425045 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 05:57:21.425087 | orchestrator | Saturday 28 March 2026 05:56:43 +0000 (0:00:00.777) 0:42:49.459 ******** 2026-03-28 05:57:21.425104 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425121 | orchestrator | 2026-03-28 05:57:21.425138 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 05:57:21.425154 | orchestrator | Saturday 28 March 2026 05:56:43 +0000 (0:00:00.781) 0:42:50.240 ******** 2026-03-28 05:57:21.425170 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425214 | orchestrator | 2026-03-28 05:57:21.425231 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 05:57:21.425247 | orchestrator | Saturday 28 March 2026 05:56:44 +0000 (0:00:00.835) 0:42:51.076 ******** 2026-03-28 05:57:21.425263 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425279 | orchestrator | 2026-03-28 05:57:21.425296 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 05:57:21.425312 | orchestrator | Saturday 28 March 2026 05:56:45 +0000 (0:00:00.775) 0:42:51.852 ******** 2026-03-28 05:57:21.425329 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425346 | orchestrator | 2026-03-28 05:57:21.425363 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 05:57:21.425381 | orchestrator | Saturday 28 March 2026 05:56:46 +0000 (0:00:00.797) 0:42:52.649 ******** 2026-03-28 05:57:21.425399 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425415 | orchestrator | 2026-03-28 05:57:21.425432 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 05:57:21.425448 | orchestrator | Saturday 28 
March 2026 05:56:47 +0000 (0:00:00.992) 0:42:53.642 ******** 2026-03-28 05:57:21.425466 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.425484 | orchestrator | 2026-03-28 05:57:21.425500 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 05:57:21.425518 | orchestrator | Saturday 28 March 2026 05:56:48 +0000 (0:00:01.535) 0:42:55.178 ******** 2026-03-28 05:57:21.425537 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.425555 | orchestrator | 2026-03-28 05:57:21.425574 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 05:57:21.425590 | orchestrator | Saturday 28 March 2026 05:56:50 +0000 (0:00:01.851) 0:42:57.029 ******** 2026-03-28 05:57:21.425606 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-28 05:57:21.425624 | orchestrator | 2026-03-28 05:57:21.425640 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 05:57:21.425656 | orchestrator | Saturday 28 March 2026 05:56:51 +0000 (0:00:01.119) 0:42:58.149 ******** 2026-03-28 05:57:21.425673 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425689 | orchestrator | 2026-03-28 05:57:21.425706 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 05:57:21.425724 | orchestrator | Saturday 28 March 2026 05:56:52 +0000 (0:00:01.157) 0:42:59.307 ******** 2026-03-28 05:57:21.425743 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.425762 | orchestrator | 2026-03-28 05:57:21.425780 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 05:57:21.425797 | orchestrator | Saturday 28 March 2026 05:56:54 +0000 (0:00:01.132) 0:43:00.439 ******** 2026-03-28 05:57:21.425815 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 05:57:21.425833 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 05:57:21.425851 | orchestrator | 2026-03-28 05:57:21.425869 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 05:57:21.425888 | orchestrator | Saturday 28 March 2026 05:56:55 +0000 (0:00:01.845) 0:43:02.285 ******** 2026-03-28 05:57:21.425906 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.425925 | orchestrator | 2026-03-28 05:57:21.425943 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 05:57:21.425962 | orchestrator | Saturday 28 March 2026 05:56:57 +0000 (0:00:01.430) 0:43:03.716 ******** 2026-03-28 05:57:21.425997 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426092 | orchestrator | 2026-03-28 05:57:21.426142 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 05:57:21.426163 | orchestrator | Saturday 28 March 2026 05:56:58 +0000 (0:00:01.216) 0:43:04.932 ******** 2026-03-28 05:57:21.426221 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426241 | orchestrator | 2026-03-28 05:57:21.426259 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 05:57:21.426278 | orchestrator | Saturday 28 March 2026 05:56:59 +0000 (0:00:00.788) 0:43:05.721 ******** 2026-03-28 05:57:21.426296 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426314 | orchestrator | 2026-03-28 05:57:21.426333 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 05:57:21.426351 | orchestrator | Saturday 28 March 2026 05:57:00 +0000 (0:00:00.765) 0:43:06.486 ******** 2026-03-28 05:57:21.426368 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-03-28 05:57:21.426387 | orchestrator | 2026-03-28 05:57:21.426407 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 05:57:21.426426 | orchestrator | Saturday 28 March 2026 05:57:01 +0000 (0:00:01.266) 0:43:07.753 ******** 2026-03-28 05:57:21.426444 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.426462 | orchestrator | 2026-03-28 05:57:21.426480 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 05:57:21.426498 | orchestrator | Saturday 28 March 2026 05:57:03 +0000 (0:00:01.869) 0:43:09.622 ******** 2026-03-28 05:57:21.426518 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 05:57:21.426537 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 05:57:21.426555 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 05:57:21.426573 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426591 | orchestrator | 2026-03-28 05:57:21.426609 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 05:57:21.426628 | orchestrator | Saturday 28 March 2026 05:57:04 +0000 (0:00:01.153) 0:43:10.776 ******** 2026-03-28 05:57:21.426647 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426666 | orchestrator | 2026-03-28 05:57:21.426686 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-28 05:57:21.426705 | orchestrator | Saturday 28 March 2026 05:57:05 +0000 (0:00:01.213) 0:43:11.990 ******** 2026-03-28 05:57:21.426725 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426743 | orchestrator | 2026-03-28 05:57:21.426761 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 05:57:21.426779 | 
orchestrator | Saturday 28 March 2026 05:57:06 +0000 (0:00:01.193) 0:43:13.183 ******** 2026-03-28 05:57:21.426797 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426815 | orchestrator | 2026-03-28 05:57:21.426833 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 05:57:21.426851 | orchestrator | Saturday 28 March 2026 05:57:07 +0000 (0:00:01.201) 0:43:14.384 ******** 2026-03-28 05:57:21.426870 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426889 | orchestrator | 2026-03-28 05:57:21.426908 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 05:57:21.426926 | orchestrator | Saturday 28 March 2026 05:57:09 +0000 (0:00:01.156) 0:43:15.541 ******** 2026-03-28 05:57:21.426944 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.426962 | orchestrator | 2026-03-28 05:57:21.426980 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 05:57:21.426997 | orchestrator | Saturday 28 March 2026 05:57:09 +0000 (0:00:00.801) 0:43:16.342 ******** 2026-03-28 05:57:21.427015 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.427033 | orchestrator | 2026-03-28 05:57:21.427051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 05:57:21.427084 | orchestrator | Saturday 28 March 2026 05:57:12 +0000 (0:00:02.223) 0:43:18.565 ******** 2026-03-28 05:57:21.427101 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:57:21.427120 | orchestrator | 2026-03-28 05:57:21.427137 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 05:57:21.427155 | orchestrator | Saturday 28 March 2026 05:57:12 +0000 (0:00:00.849) 0:43:19.414 ******** 2026-03-28 05:57:21.427173 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-03-28 05:57:21.427216 | orchestrator | 2026-03-28 05:57:21.427235 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 05:57:21.427253 | orchestrator | Saturday 28 March 2026 05:57:14 +0000 (0:00:01.103) 0:43:20.518 ******** 2026-03-28 05:57:21.427270 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.427290 | orchestrator | 2026-03-28 05:57:21.427307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 05:57:21.427326 | orchestrator | Saturday 28 March 2026 05:57:15 +0000 (0:00:01.235) 0:43:21.754 ******** 2026-03-28 05:57:21.427344 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.427362 | orchestrator | 2026-03-28 05:57:21.427379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 05:57:21.427397 | orchestrator | Saturday 28 March 2026 05:57:16 +0000 (0:00:01.179) 0:43:22.933 ******** 2026-03-28 05:57:21.427414 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.427431 | orchestrator | 2026-03-28 05:57:21.427449 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 05:57:21.427467 | orchestrator | Saturday 28 March 2026 05:57:17 +0000 (0:00:01.273) 0:43:24.206 ******** 2026-03-28 05:57:21.427485 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.427503 | orchestrator | 2026-03-28 05:57:21.427522 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 05:57:21.427539 | orchestrator | Saturday 28 March 2026 05:57:18 +0000 (0:00:01.187) 0:43:25.394 ******** 2026-03-28 05:57:21.427557 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:57:21.427575 | orchestrator | 2026-03-28 05:57:21.427593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 05:57:21.427611 | orchestrator | 
Saturday 28 March 2026 05:57:20 +0000 (0:00:01.253) 0:43:26.647 ******** 2026-03-28 05:57:21.427645 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.188708 | orchestrator | 2026-03-28 05:58:04.188827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 05:58:04.188860 | orchestrator | Saturday 28 March 2026 05:57:21 +0000 (0:00:01.200) 0:43:27.847 ******** 2026-03-28 05:58:04.188874 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.188887 | orchestrator | 2026-03-28 05:58:04.188899 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 05:58:04.188911 | orchestrator | Saturday 28 March 2026 05:57:22 +0000 (0:00:01.151) 0:43:28.999 ******** 2026-03-28 05:58:04.188923 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.188935 | orchestrator | 2026-03-28 05:58:04.188947 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 05:58:04.188959 | orchestrator | Saturday 28 March 2026 05:57:23 +0000 (0:00:01.152) 0:43:30.152 ******** 2026-03-28 05:58:04.188971 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:04.188983 | orchestrator | 2026-03-28 05:58:04.188995 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 05:58:04.189006 | orchestrator | Saturday 28 March 2026 05:57:24 +0000 (0:00:00.825) 0:43:30.977 ******** 2026-03-28 05:58:04.189018 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-28 05:58:04.189031 | orchestrator | 2026-03-28 05:58:04.189043 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 05:58:04.189054 | orchestrator | Saturday 28 March 2026 05:57:25 +0000 (0:00:01.150) 0:43:32.128 ******** 2026-03-28 05:58:04.189088 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-03-28 05:58:04.189101 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-28 05:58:04.189113 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-28 05:58:04.189124 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-28 05:58:04.189135 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-28 05:58:04.189147 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-28 05:58:04.189189 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-28 05:58:04.189202 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-28 05:58:04.189213 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 05:58:04.189224 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 05:58:04.189235 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 05:58:04.189249 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 05:58:04.189262 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 05:58:04.189275 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 05:58:04.189290 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-28 05:58:04.189303 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-28 05:58:04.189316 | orchestrator | 2026-03-28 05:58:04.189329 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 05:58:04.189342 | orchestrator | Saturday 28 March 2026 05:57:31 +0000 (0:00:06.167) 0:43:38.296 ******** 2026-03-28 05:58:04.189355 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-28 05:58:04.189369 | orchestrator | 2026-03-28 05:58:04.189382 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-28 05:58:04.189395 | orchestrator | Saturday 28 March 2026 05:57:33 +0000 (0:00:01.301) 0:43:39.597 ******** 2026-03-28 05:58:04.189409 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 05:58:04.189424 | orchestrator | 2026-03-28 05:58:04.189437 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-28 05:58:04.189451 | orchestrator | Saturday 28 March 2026 05:57:34 +0000 (0:00:01.586) 0:43:41.184 ******** 2026-03-28 05:58:04.189464 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 05:58:04.189475 | orchestrator | 2026-03-28 05:58:04.189486 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 05:58:04.189497 | orchestrator | Saturday 28 March 2026 05:57:36 +0000 (0:00:01.694) 0:43:42.878 ******** 2026-03-28 05:58:04.189508 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189519 | orchestrator | 2026-03-28 05:58:04.189530 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 05:58:04.189541 | orchestrator | Saturday 28 March 2026 05:57:37 +0000 (0:00:00.783) 0:43:43.661 ******** 2026-03-28 05:58:04.189553 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189564 | orchestrator | 2026-03-28 05:58:04.189575 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 05:58:04.189586 | orchestrator | Saturday 28 March 2026 05:57:38 +0000 (0:00:00.774) 0:43:44.436 ******** 2026-03-28 05:58:04.189597 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189608 | orchestrator | 2026-03-28 05:58:04.189619 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-28 05:58:04.189630 | orchestrator | Saturday 28 March 2026 05:57:38 +0000 (0:00:00.773) 0:43:45.209 ******** 2026-03-28 05:58:04.189640 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189651 | orchestrator | 2026-03-28 05:58:04.189662 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 05:58:04.189682 | orchestrator | Saturday 28 March 2026 05:57:39 +0000 (0:00:00.813) 0:43:46.023 ******** 2026-03-28 05:58:04.189693 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189704 | orchestrator | 2026-03-28 05:58:04.189715 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 05:58:04.189727 | orchestrator | Saturday 28 March 2026 05:57:40 +0000 (0:00:00.759) 0:43:46.783 ******** 2026-03-28 05:58:04.189755 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189768 | orchestrator | 2026-03-28 05:58:04.189785 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 05:58:04.189796 | orchestrator | Saturday 28 March 2026 05:57:41 +0000 (0:00:00.763) 0:43:47.547 ******** 2026-03-28 05:58:04.189807 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189818 | orchestrator | 2026-03-28 05:58:04.189829 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 05:58:04.189840 | orchestrator | Saturday 28 March 2026 05:57:41 +0000 (0:00:00.772) 0:43:48.320 ******** 2026-03-28 05:58:04.189851 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189862 | orchestrator | 2026-03-28 05:58:04.189873 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 05:58:04.189884 | orchestrator | Saturday 28 
March 2026 05:57:42 +0000 (0:00:00.786) 0:43:49.106 ******** 2026-03-28 05:58:04.189895 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189906 | orchestrator | 2026-03-28 05:58:04.189917 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 05:58:04.189929 | orchestrator | Saturday 28 March 2026 05:57:43 +0000 (0:00:00.780) 0:43:49.887 ******** 2026-03-28 05:58:04.189940 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.189951 | orchestrator | 2026-03-28 05:58:04.189961 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 05:58:04.189972 | orchestrator | Saturday 28 March 2026 05:57:44 +0000 (0:00:00.774) 0:43:50.661 ******** 2026-03-28 05:58:04.189983 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:04.189994 | orchestrator | 2026-03-28 05:58:04.190005 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 05:58:04.190078 | orchestrator | Saturday 28 March 2026 05:57:45 +0000 (0:00:01.381) 0:43:52.043 ******** 2026-03-28 05:58:04.190091 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-28 05:58:04.190102 | orchestrator | 2026-03-28 05:58:04.190114 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 05:58:04.190125 | orchestrator | Saturday 28 March 2026 05:57:49 +0000 (0:00:04.189) 0:43:56.232 ******** 2026-03-28 05:58:04.190136 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 05:58:04.190147 | orchestrator | 2026-03-28 05:58:04.190184 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 05:58:04.190196 | orchestrator | Saturday 28 March 2026 05:57:50 +0000 (0:00:00.910) 0:43:57.143 ******** 2026-03-28 05:58:04.190210 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-28 05:58:04.190225 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-28 05:58:04.190238 | orchestrator | 2026-03-28 05:58:04.190249 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 05:58:04.190260 | orchestrator | Saturday 28 March 2026 05:57:58 +0000 (0:00:07.529) 0:44:04.672 ******** 2026-03-28 05:58:04.190279 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.190290 | orchestrator | 2026-03-28 05:58:04.190301 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 05:58:04.190312 | orchestrator | Saturday 28 March 2026 05:57:59 +0000 (0:00:00.796) 0:44:05.469 ******** 2026-03-28 05:58:04.190323 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.190334 | orchestrator | 2026-03-28 05:58:04.190345 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 05:58:04.190356 | orchestrator | Saturday 28 March 2026 05:57:59 +0000 (0:00:00.765) 0:44:06.235 ******** 2026-03-28 05:58:04.190367 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.190377 | orchestrator | 2026-03-28 05:58:04.190388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-28 05:58:04.190399 | orchestrator | Saturday 28 March 2026 05:58:00 +0000 (0:00:00.813) 0:44:07.049 ******** 2026-03-28 05:58:04.190410 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.190421 | orchestrator | 2026-03-28 05:58:04.190432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 05:58:04.190443 | orchestrator | Saturday 28 March 2026 05:58:01 +0000 (0:00:00.794) 0:44:07.843 ******** 2026-03-28 05:58:04.190454 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:04.190465 | orchestrator | 2026-03-28 05:58:04.190476 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 05:58:04.190487 | orchestrator | Saturday 28 March 2026 05:58:02 +0000 (0:00:00.838) 0:44:08.682 ******** 2026-03-28 05:58:04.190498 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:04.190509 | orchestrator | 2026-03-28 05:58:04.190519 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 05:58:04.190530 | orchestrator | Saturday 28 March 2026 05:58:03 +0000 (0:00:00.878) 0:44:09.561 ******** 2026-03-28 05:58:04.190541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:58:04.190553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:58:04.190572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 05:58:52.794262 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.794381 | orchestrator | 2026-03-28 05:58:52.794416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 05:58:52.794430 | orchestrator | Saturday 28 March 2026 05:58:04 +0000 (0:00:01.047) 0:44:10.608 ******** 2026-03-28 05:58:52.794442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:58:52.794454 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:58:52.794465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 05:58:52.794476 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.794487 | orchestrator | 2026-03-28 05:58:52.794499 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 05:58:52.794510 | orchestrator | Saturday 28 March 2026 05:58:05 +0000 (0:00:01.423) 0:44:12.031 ******** 2026-03-28 05:58:52.794521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 05:58:52.794532 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 05:58:52.794543 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 05:58:52.794555 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.794566 | orchestrator | 2026-03-28 05:58:52.794577 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 05:58:52.794588 | orchestrator | Saturday 28 March 2026 05:58:07 +0000 (0:00:01.454) 0:44:13.486 ******** 2026-03-28 05:58:52.794600 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.794612 | orchestrator | 2026-03-28 05:58:52.794623 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 05:58:52.794634 | orchestrator | Saturday 28 March 2026 05:58:07 +0000 (0:00:00.893) 0:44:14.380 ******** 2026-03-28 05:58:52.794666 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 05:58:52.794678 | orchestrator | 2026-03-28 05:58:52.794689 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 05:58:52.794700 | orchestrator | Saturday 28 March 2026 05:58:08 +0000 (0:00:01.000) 0:44:15.380 ******** 2026-03-28 05:58:52.794712 | orchestrator | changed: [testbed-node-4] 2026-03-28 05:58:52.794722 | orchestrator | 
2026-03-28 05:58:52.794734 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-28 05:58:52.794745 | orchestrator | Saturday 28 March 2026 05:58:10 +0000 (0:00:01.393) 0:44:16.774 ******** 2026-03-28 05:58:52.794756 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.794767 | orchestrator | 2026-03-28 05:58:52.794778 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-28 05:58:52.794789 | orchestrator | Saturday 28 March 2026 05:58:11 +0000 (0:00:00.785) 0:44:17.559 ******** 2026-03-28 05:58:52.794800 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:58:52.794812 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:58:52.794823 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:58:52.794834 | orchestrator | 2026-03-28 05:58:52.794845 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-28 05:58:52.794856 | orchestrator | Saturday 28 March 2026 05:58:12 +0000 (0:00:01.370) 0:44:18.930 ******** 2026-03-28 05:58:52.794867 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-28 05:58:52.794878 | orchestrator | 2026-03-28 05:58:52.794889 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-28 05:58:52.794900 | orchestrator | Saturday 28 March 2026 05:58:13 +0000 (0:00:01.118) 0:44:20.048 ******** 2026-03-28 05:58:52.794911 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.794925 | orchestrator | 2026-03-28 05:58:52.794944 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-28 05:58:52.794963 | orchestrator | Saturday 28 March 2026 05:58:14 +0000 (0:00:01.175) 
0:44:21.223 ******** 2026-03-28 05:58:52.794982 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.795001 | orchestrator | 2026-03-28 05:58:52.795019 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-28 05:58:52.795037 | orchestrator | Saturday 28 March 2026 05:58:15 +0000 (0:00:01.185) 0:44:22.409 ******** 2026-03-28 05:58:52.795054 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.795071 | orchestrator | 2026-03-28 05:58:52.795088 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-28 05:58:52.795106 | orchestrator | Saturday 28 March 2026 05:58:17 +0000 (0:00:01.511) 0:44:23.920 ******** 2026-03-28 05:58:52.795124 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.795171 | orchestrator | 2026-03-28 05:58:52.795190 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-28 05:58:52.795209 | orchestrator | Saturday 28 March 2026 05:58:18 +0000 (0:00:01.265) 0:44:25.185 ******** 2026-03-28 05:58:52.795227 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-28 05:58:52.795248 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-28 05:58:52.795267 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-28 05:58:52.795329 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-28 05:58:52.795353 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-28 05:58:52.795364 | orchestrator | 2026-03-28 05:58:52.795375 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-28 05:58:52.795387 | orchestrator | Saturday 28 March 2026 05:58:21 +0000 (0:00:02.611) 0:44:27.797 ******** 2026-03-28 
05:58:52.795398 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.795420 | orchestrator | 2026-03-28 05:58:52.795431 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-28 05:58:52.795443 | orchestrator | Saturday 28 March 2026 05:58:22 +0000 (0:00:00.817) 0:44:28.614 ******** 2026-03-28 05:58:52.795475 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-28 05:58:52.795487 | orchestrator | 2026-03-28 05:58:52.795507 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-28 05:58:52.795518 | orchestrator | Saturday 28 March 2026 05:58:23 +0000 (0:00:01.137) 0:44:29.752 ******** 2026-03-28 05:58:52.795529 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-28 05:58:52.795540 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-28 05:58:52.795551 | orchestrator | 2026-03-28 05:58:52.795562 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-28 05:58:52.795572 | orchestrator | Saturday 28 March 2026 05:58:25 +0000 (0:00:01.920) 0:44:31.673 ******** 2026-03-28 05:58:52.795583 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 05:58:52.795594 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 05:58:52.795605 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 05:58:52.795616 | orchestrator | 2026-03-28 05:58:52.795626 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-28 05:58:52.795637 | orchestrator | Saturday 28 March 2026 05:58:28 +0000 (0:00:03.158) 0:44:34.831 ******** 2026-03-28 05:58:52.795648 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-28 05:58:52.795659 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 
05:58:52.795670 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.795681 | orchestrator | 2026-03-28 05:58:52.795691 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-28 05:58:52.795707 | orchestrator | Saturday 28 March 2026 05:58:30 +0000 (0:00:01.612) 0:44:36.444 ******** 2026-03-28 05:58:52.795727 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.795744 | orchestrator | 2026-03-28 05:58:52.795763 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-28 05:58:52.795781 | orchestrator | Saturday 28 March 2026 05:58:30 +0000 (0:00:00.875) 0:44:37.320 ******** 2026-03-28 05:58:52.795797 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.795814 | orchestrator | 2026-03-28 05:58:52.795832 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-28 05:58:52.795850 | orchestrator | Saturday 28 March 2026 05:58:31 +0000 (0:00:00.785) 0:44:38.105 ******** 2026-03-28 05:58:52.795869 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.795890 | orchestrator | 2026-03-28 05:58:52.795907 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-28 05:58:52.795926 | orchestrator | Saturday 28 March 2026 05:58:32 +0000 (0:00:00.775) 0:44:38.882 ******** 2026-03-28 05:58:52.795938 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-28 05:58:52.795949 | orchestrator | 2026-03-28 05:58:52.795960 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-28 05:58:52.795971 | orchestrator | Saturday 28 March 2026 05:58:33 +0000 (0:00:01.134) 0:44:40.016 ******** 2026-03-28 05:58:52.795982 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.795993 | orchestrator | 2026-03-28 05:58:52.796004 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-28 05:58:52.796015 | orchestrator | Saturday 28 March 2026 05:58:35 +0000 (0:00:01.454) 0:44:41.470 ******** 2026-03-28 05:58:52.796025 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.796036 | orchestrator | 2026-03-28 05:58:52.796047 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-28 05:58:52.796058 | orchestrator | Saturday 28 March 2026 05:58:38 +0000 (0:00:03.434) 0:44:44.905 ******** 2026-03-28 05:58:52.796069 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-28 05:58:52.796091 | orchestrator | 2026-03-28 05:58:52.796103 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-28 05:58:52.796113 | orchestrator | Saturday 28 March 2026 05:58:39 +0000 (0:00:01.147) 0:44:46.052 ******** 2026-03-28 05:58:52.796124 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.796190 | orchestrator | 2026-03-28 05:58:52.796202 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-28 05:58:52.796213 | orchestrator | Saturday 28 March 2026 05:58:41 +0000 (0:00:01.984) 0:44:48.037 ******** 2026-03-28 05:58:52.796224 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.796235 | orchestrator | 2026-03-28 05:58:52.796246 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-28 05:58:52.796256 | orchestrator | Saturday 28 March 2026 05:58:43 +0000 (0:00:01.927) 0:44:49.964 ******** 2026-03-28 05:58:52.796267 | orchestrator | ok: [testbed-node-4] 2026-03-28 05:58:52.796278 | orchestrator | 2026-03-28 05:58:52.796289 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-28 05:58:52.796300 | orchestrator | Saturday 28 March 2026 05:58:45 +0000 (0:00:02.159) 0:44:52.124 ******** 2026-03-28 
05:58:52.796311 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.796322 | orchestrator | 2026-03-28 05:58:52.796333 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-28 05:58:52.796344 | orchestrator | Saturday 28 March 2026 05:58:46 +0000 (0:00:01.175) 0:44:53.299 ******** 2026-03-28 05:58:52.796355 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:58:52.796365 | orchestrator | 2026-03-28 05:58:52.796376 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-28 05:58:52.796387 | orchestrator | Saturday 28 March 2026 05:58:48 +0000 (0:00:01.151) 0:44:54.451 ******** 2026-03-28 05:58:52.796398 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-28 05:58:52.796409 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-28 05:58:52.796420 | orchestrator | 2026-03-28 05:58:52.796431 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-28 05:58:52.796442 | orchestrator | Saturday 28 March 2026 05:58:49 +0000 (0:00:01.849) 0:44:56.300 ******** 2026-03-28 05:58:52.796452 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-28 05:58:52.796464 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-28 05:58:52.796474 | orchestrator | 2026-03-28 05:58:52.796485 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-28 05:58:52.796506 | orchestrator | Saturday 28 March 2026 05:58:52 +0000 (0:00:02.914) 0:44:59.215 ******** 2026-03-28 05:59:44.310229 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-28 05:59:44.310361 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-28 05:59:44.310389 | orchestrator | 2026-03-28 05:59:44.310409 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-28 05:59:44.310428 | orchestrator | Saturday 28 March 2026 05:58:56 +0000 (0:00:04.133) 
0:45:03.348 ******** 2026-03-28 05:59:44.310446 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.310465 | orchestrator | 2026-03-28 05:59:44.310483 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-28 05:59:44.310500 | orchestrator | Saturday 28 March 2026 05:58:57 +0000 (0:00:00.902) 0:45:04.251 ******** 2026-03-28 05:59:44.310521 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.310538 | orchestrator | 2026-03-28 05:59:44.310556 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-28 05:59:44.310572 | orchestrator | Saturday 28 March 2026 05:58:58 +0000 (0:00:00.874) 0:45:05.125 ******** 2026-03-28 05:59:44.310591 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.310610 | orchestrator | 2026-03-28 05:59:44.310630 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-28 05:59:44.310650 | orchestrator | Saturday 28 March 2026 05:59:00 +0000 (0:00:01.505) 0:45:06.631 ******** 2026-03-28 05:59:44.310670 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.310689 | orchestrator | 2026-03-28 05:59:44.310740 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-28 05:59:44.310763 | orchestrator | Saturday 28 March 2026 05:59:00 +0000 (0:00:00.779) 0:45:07.410 ******** 2026-03-28 05:59:44.310781 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.310798 | orchestrator | 2026-03-28 05:59:44.310816 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-28 05:59:44.310834 | orchestrator | Saturday 28 March 2026 05:59:01 +0000 (0:00:00.749) 0:45:08.160 ******** 2026-03-28 05:59:44.310852 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-28 05:59:44.310873 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-28 05:59:44.310891 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-28 05:59:44.310910 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-28 05:59:44.310927 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-28 05:59:44.310946 | orchestrator | 2026-03-28 05:59:44.310965 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 05:59:44.310983 | orchestrator | Saturday 28 March 2026 05:59:15 +0000 (0:00:13.935) 0:45:22.096 ******** 2026-03-28 05:59:44.311001 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311018 | orchestrator | 2026-03-28 05:59:44.311034 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 05:59:44.311052 | orchestrator | Saturday 28 March 2026 05:59:16 +0000 (0:00:00.804) 0:45:22.900 ******** 2026-03-28 05:59:44.311070 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311089 | orchestrator | 2026-03-28 05:59:44.311142 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 05:59:44.311163 | orchestrator | Saturday 28 March 2026 05:59:17 +0000 (0:00:00.854) 0:45:23.754 ******** 2026-03-28 05:59:44.311182 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311201 | orchestrator | 2026-03-28 05:59:44.311220 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 05:59:44.311239 | orchestrator | Saturday 28 March 2026 05:59:18 +0000 (0:00:00.762) 0:45:24.517 ******** 2026-03-28 05:59:44.311259 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311277 | orchestrator 
| 2026-03-28 05:59:44.311297 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 05:59:44.311317 | orchestrator | Saturday 28 March 2026 05:59:18 +0000 (0:00:00.792) 0:45:25.310 ******** 2026-03-28 05:59:44.311337 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311357 | orchestrator | 2026-03-28 05:59:44.311375 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 05:59:44.311394 | orchestrator | Saturday 28 March 2026 05:59:19 +0000 (0:00:00.794) 0:45:26.105 ******** 2026-03-28 05:59:44.311414 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311434 | orchestrator | 2026-03-28 05:59:44.311453 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 05:59:44.311471 | orchestrator | Saturday 28 March 2026 05:59:20 +0000 (0:00:00.842) 0:45:26.947 ******** 2026-03-28 05:59:44.311489 | orchestrator | skipping: [testbed-node-4] 2026-03-28 05:59:44.311508 | orchestrator | 2026-03-28 05:59:44.311527 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-28 05:59:44.311545 | orchestrator | 2026-03-28 05:59:44.311562 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 05:59:44.311580 | orchestrator | Saturday 28 March 2026 05:59:21 +0000 (0:00:00.991) 0:45:27.938 ******** 2026-03-28 05:59:44.311597 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-28 05:59:44.311615 | orchestrator | 2026-03-28 05:59:44.311633 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 05:59:44.311653 | orchestrator | Saturday 28 March 2026 05:59:22 +0000 (0:00:01.338) 0:45:29.277 ******** 2026-03-28 05:59:44.311689 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.311709 | orchestrator | 
2026-03-28 05:59:44.311727 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 05:59:44.311746 | orchestrator | Saturday 28 March 2026 05:59:24 +0000 (0:00:01.466) 0:45:30.744 ******** 2026-03-28 05:59:44.311764 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.311780 | orchestrator | 2026-03-28 05:59:44.311797 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 05:59:44.311815 | orchestrator | Saturday 28 March 2026 05:59:25 +0000 (0:00:01.221) 0:45:31.966 ******** 2026-03-28 05:59:44.311862 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.311882 | orchestrator | 2026-03-28 05:59:44.311915 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 05:59:44.311935 | orchestrator | Saturday 28 March 2026 05:59:26 +0000 (0:00:01.412) 0:45:33.378 ******** 2026-03-28 05:59:44.311954 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.311972 | orchestrator | 2026-03-28 05:59:44.311991 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 05:59:44.312009 | orchestrator | Saturday 28 March 2026 05:59:28 +0000 (0:00:01.259) 0:45:34.638 ******** 2026-03-28 05:59:44.312028 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.312048 | orchestrator | 2026-03-28 05:59:44.312067 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 05:59:44.312085 | orchestrator | Saturday 28 March 2026 05:59:29 +0000 (0:00:01.182) 0:45:35.821 ******** 2026-03-28 05:59:44.312129 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.312150 | orchestrator | 2026-03-28 05:59:44.312172 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 05:59:44.312193 | orchestrator | Saturday 28 March 2026 05:59:30 +0000 (0:00:01.242) 0:45:37.063 
******** 2026-03-28 05:59:44.312212 | orchestrator | skipping: [testbed-node-5] 2026-03-28 05:59:44.312231 | orchestrator | 2026-03-28 05:59:44.312251 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 05:59:44.312271 | orchestrator | Saturday 28 March 2026 05:59:31 +0000 (0:00:01.126) 0:45:38.190 ******** 2026-03-28 05:59:44.312289 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.312308 | orchestrator | 2026-03-28 05:59:44.312327 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 05:59:44.312347 | orchestrator | Saturday 28 March 2026 05:59:32 +0000 (0:00:01.166) 0:45:39.357 ******** 2026-03-28 05:59:44.312366 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:59:44.312385 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:59:44.312404 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 05:59:44.312423 | orchestrator | 2026-03-28 05:59:44.312443 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 05:59:44.312462 | orchestrator | Saturday 28 March 2026 05:59:35 +0000 (0:00:02.121) 0:45:41.478 ******** 2026-03-28 05:59:44.312479 | orchestrator | ok: [testbed-node-5] 2026-03-28 05:59:44.312498 | orchestrator | 2026-03-28 05:59:44.312516 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 05:59:44.312535 | orchestrator | Saturday 28 March 2026 05:59:36 +0000 (0:00:01.281) 0:45:42.760 ******** 2026-03-28 05:59:44.312553 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 05:59:44.312572 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 05:59:44.312591 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 05:59:44.312610 | orchestrator |
2026-03-28 05:59:44.312630 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 05:59:44.312650 | orchestrator | Saturday 28 March 2026 05:59:39 +0000 (0:00:03.279) 0:45:46.039 ********
2026-03-28 05:59:44.312689 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 05:59:44.312710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 05:59:44.312729 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 05:59:44.312746 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:59:44.312764 | orchestrator |
2026-03-28 05:59:44.312782 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 05:59:44.312800 | orchestrator | Saturday 28 March 2026 05:59:41 +0000 (0:00:01.861) 0:45:47.901 ********
2026-03-28 05:59:44.312823 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 05:59:44.312846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 05:59:44.312869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 05:59:44.312887 | orchestrator | skipping: [testbed-node-5]
2026-03-28 05:59:44.312908 | orchestrator |
2026-03-28 05:59:44.312928 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 05:59:44.312948 | orchestrator | Saturday 28 March 2026 05:59:43 +0000 (0:00:01.663) 0:45:49.565 ********
2026-03-28 05:59:44.312969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 05:59:44.313026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:03.633909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:03.634066 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634084 | orchestrator |
2026-03-28 06:00:03.634138 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 06:00:03.634160 | orchestrator | Saturday 28 March 2026 05:59:44 +0000 (0:00:01.167) 0:45:50.733 ********
2026-03-28 06:00:03.634181 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 05:59:37.257455', 'end': '2026-03-28 05:59:37.314416', 'delta': '0:00:00.056961', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 06:00:03.634222 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 05:59:37.842022', 'end': '2026-03-28 05:59:37.882184', 'delta': '0:00:00.040162', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 06:00:03.634233 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 05:59:38.414142', 'end': '2026-03-28 05:59:38.474848', 'delta': '0:00:00.060706', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 06:00:03.634244 | orchestrator |
2026-03-28 06:00:03.634254 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 06:00:03.634264 | orchestrator | Saturday 28 March 2026 05:59:45 +0000 (0:00:01.262) 0:45:51.995 ********
2026-03-28 06:00:03.634274 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.634285 | orchestrator |
2026-03-28 06:00:03.634295 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 06:00:03.634305 | orchestrator | Saturday 28 March 2026 05:59:46 +0000 (0:00:01.290) 0:45:53.286 ********
2026-03-28 06:00:03.634315 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634325 | orchestrator |
2026-03-28 06:00:03.634335 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 06:00:03.634348 | orchestrator | Saturday 28 March 2026 05:59:48 +0000 (0:00:01.438) 0:45:54.724 ********
2026-03-28 06:00:03.634364 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.634380 | orchestrator |
2026-03-28 06:00:03.634395 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 06:00:03.634410 | orchestrator | Saturday 28 March 2026 05:59:49 +0000 (0:00:01.180) 0:45:55.905 ********
2026-03-28 06:00:03.634426 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 06:00:03.634443 | orchestrator |
2026-03-28 06:00:03.634460 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:00:03.634498 | orchestrator | Saturday 28 March 2026 05:59:51 +0000 (0:00:02.041) 0:45:57.946 ********
2026-03-28 06:00:03.634514 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.634527 | orchestrator |
2026-03-28 06:00:03.634539 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 06:00:03.634550 | orchestrator | Saturday 28 March 2026 05:59:52 +0000 (0:00:01.128) 0:45:59.075 ********
2026-03-28 06:00:03.634580 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634592 | orchestrator |
2026-03-28 06:00:03.634609 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 06:00:03.634626 | orchestrator | Saturday 28 March 2026 05:59:53 +0000 (0:00:01.124) 0:46:00.200 ********
2026-03-28 06:00:03.634642 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634658 | orchestrator |
2026-03-28 06:00:03.634675 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:00:03.634691 | orchestrator | Saturday 28 March 2026 05:59:55 +0000 (0:00:01.286) 0:46:01.487 ********
2026-03-28 06:00:03.634721 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634738 | orchestrator |
2026-03-28 06:00:03.634755 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 06:00:03.634772 | orchestrator | Saturday 28 March 2026 05:59:56 +0000 (0:00:01.203) 0:46:02.691 ********
2026-03-28 06:00:03.634790 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634805 | orchestrator |
2026-03-28 06:00:03.634823 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 06:00:03.634834 | orchestrator | Saturday 28 March 2026 05:59:57 +0000 (0:00:01.150) 0:46:03.841 ********
2026-03-28 06:00:03.634843 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.634853 | orchestrator |
2026-03-28 06:00:03.634863 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 06:00:03.634873 | orchestrator | Saturday 28 March 2026 05:59:58 +0000 (0:00:01.247) 0:46:05.089 ********
2026-03-28 06:00:03.634883 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634893 | orchestrator |
2026-03-28 06:00:03.634903 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 06:00:03.634913 | orchestrator | Saturday 28 March 2026 05:59:59 +0000 (0:00:01.168) 0:46:06.257 ********
2026-03-28 06:00:03.634922 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.634932 | orchestrator |
2026-03-28 06:00:03.634942 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 06:00:03.634952 | orchestrator | Saturday 28 March 2026 06:00:01 +0000 (0:00:01.214) 0:46:07.472 ********
2026-03-28 06:00:03.634962 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:03.634972 | orchestrator |
2026-03-28 06:00:03.634982 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 06:00:03.634992 | orchestrator | Saturday 28 March 2026 06:00:02 +0000 (0:00:01.123) 0:46:08.595 ********
2026-03-28 06:00:03.635002 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:03.635012 | orchestrator |
2026-03-28 06:00:03.635022 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 06:00:03.635032 | orchestrator | Saturday 28 March 2026 06:00:03 +0000 (0:00:01.224) 0:46:09.820 ********
2026-03-28 06:00:03.635042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:03.635054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}})
2026-03-28 06:00:03.635066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-28 06:00:03.635131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}})
2026-03-28 06:00:04.760172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-28 06:00:04.760337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}})
2026-03-28 06:00:04.760438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}})
2026-03-28 06:00:04.760450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-28 06:00:04.760476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:00:04.760516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-28 06:00:04.989038 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:04.989213 | orchestrator |
2026-03-28 06:00:04.989231 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 06:00:04.989243 | orchestrator | Saturday 28 March 2026 06:00:04 +0000 (0:00:01.366) 0:46:11.187 ********
2026-03-28 06:00:04.989256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989271 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989283 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989410 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:04.989450 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075784 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075884 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:00:18.075906 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:18.075925 | orchestrator |
2026-03-28 06:00:18.075936 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 06:00:18.075948 | orchestrator | Saturday 28 March 2026 06:00:06 +0000 (0:00:01.448) 0:46:12.636 ********
2026-03-28 06:00:18.075958 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:18.075969 | orchestrator |
2026-03-28 06:00:18.075979 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 06:00:18.075989 | orchestrator | Saturday 28 March 2026 06:00:07 +0000 (0:00:01.460) 0:46:14.096 ********
2026-03-28 06:00:18.075999 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:18.076009 | orchestrator |
2026-03-28 06:00:18.076018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:00:18.076028 | orchestrator | Saturday 28 March 2026 06:00:08 +0000 (0:00:01.081) 0:46:15.177 ********
2026-03-28 06:00:18.076038 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:00:18.076048 | orchestrator |
2026-03-28 06:00:18.076058 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 06:00:18.076068 | orchestrator | Saturday 28 March 2026 06:00:10 +0000 (0:00:01.425) 0:46:16.603 ********
2026-03-28 06:00:18.076079 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:00:18.076144 | orchestrator |
2026-03-28 06:00:18.076156 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:00:18.076166 | orchestrator | Saturday 28 March 2026 06:00:11 +0000 (0:00:01.103) 0:46:17.706 ********
2026-03-28 06:00:18.076176 | orchestrator | skipping: [testbed-node-5]
2026-03-28
06:00:18.076186 | orchestrator | 2026-03-28 06:00:18.076198 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:00:18.076209 | orchestrator | Saturday 28 March 2026 06:00:12 +0000 (0:00:01.205) 0:46:18.912 ******** 2026-03-28 06:00:18.076221 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:00:18.076232 | orchestrator | 2026-03-28 06:00:18.076244 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 06:00:18.076261 | orchestrator | Saturday 28 March 2026 06:00:13 +0000 (0:00:01.176) 0:46:20.089 ******** 2026-03-28 06:00:18.076274 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-28 06:00:18.076287 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 06:00:18.076298 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 06:00:18.076309 | orchestrator | 2026-03-28 06:00:18.076321 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 06:00:18.076333 | orchestrator | Saturday 28 March 2026 06:00:15 +0000 (0:00:01.978) 0:46:22.067 ******** 2026-03-28 06:00:18.076345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 06:00:18.076358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 06:00:18.076369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 06:00:18.076380 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:00:18.076392 | orchestrator | 2026-03-28 06:00:18.076403 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 06:00:18.076415 | orchestrator | Saturday 28 March 2026 06:00:16 +0000 (0:00:01.218) 0:46:23.286 ******** 2026-03-28 06:00:18.076428 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-28 06:00:18.076440 | 
orchestrator | 2026-03-28 06:00:18.076459 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:01:00.888226 | orchestrator | Saturday 28 March 2026 06:00:18 +0000 (0:00:01.211) 0:46:24.498 ******** 2026-03-28 06:01:00.888340 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888355 | orchestrator | 2026-03-28 06:01:00.888367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:01:00.888378 | orchestrator | Saturday 28 March 2026 06:00:19 +0000 (0:00:01.180) 0:46:25.679 ******** 2026-03-28 06:01:00.888388 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888398 | orchestrator | 2026-03-28 06:01:00.888429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:01:00.888440 | orchestrator | Saturday 28 March 2026 06:00:20 +0000 (0:00:01.225) 0:46:26.904 ******** 2026-03-28 06:01:00.888450 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888459 | orchestrator | 2026-03-28 06:01:00.888469 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:01:00.888480 | orchestrator | Saturday 28 March 2026 06:00:21 +0000 (0:00:01.159) 0:46:28.064 ******** 2026-03-28 06:01:00.888490 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.888500 | orchestrator | 2026-03-28 06:01:00.888510 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:01:00.888520 | orchestrator | Saturday 28 March 2026 06:00:22 +0000 (0:00:01.227) 0:46:29.292 ******** 2026-03-28 06:01:00.888530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:01:00.888540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:01:00.888550 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-28 06:01:00.888560 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888570 | orchestrator | 2026-03-28 06:01:00.888580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:01:00.888589 | orchestrator | Saturday 28 March 2026 06:00:24 +0000 (0:00:01.420) 0:46:30.712 ******** 2026-03-28 06:01:00.888600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:01:00.888610 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:01:00.888619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:01:00.888629 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888639 | orchestrator | 2026-03-28 06:01:00.888648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:01:00.888658 | orchestrator | Saturday 28 March 2026 06:00:25 +0000 (0:00:01.414) 0:46:32.127 ******** 2026-03-28 06:01:00.888668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:01:00.888677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:01:00.888687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:01:00.888697 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.888707 | orchestrator | 2026-03-28 06:01:00.888716 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:01:00.888729 | orchestrator | Saturday 28 March 2026 06:00:27 +0000 (0:00:01.413) 0:46:33.541 ******** 2026-03-28 06:01:00.888740 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.888752 | orchestrator | 2026-03-28 06:01:00.888765 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:01:00.888776 | orchestrator | Saturday 28 March 2026 06:00:28 +0000 
(0:00:01.240) 0:46:34.782 ******** 2026-03-28 06:01:00.888788 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 06:01:00.888799 | orchestrator | 2026-03-28 06:01:00.888810 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 06:01:00.888822 | orchestrator | Saturday 28 March 2026 06:00:30 +0000 (0:00:01.763) 0:46:36.545 ******** 2026-03-28 06:01:00.888834 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:01:00.888846 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:01:00.888857 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:01:00.888869 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:01:00.888896 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:01:00.888916 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-28 06:01:00.888929 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:01:00.888941 | orchestrator | 2026-03-28 06:01:00.888972 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 06:01:00.888982 | orchestrator | Saturday 28 March 2026 06:00:32 +0000 (0:00:02.312) 0:46:38.858 ******** 2026-03-28 06:01:00.888993 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:01:00.889002 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:01:00.889012 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:01:00.889022 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-28 06:01:00.889032 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:01:00.889041 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-28 06:01:00.889051 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:01:00.889061 | orchestrator | 2026-03-28 06:01:00.889091 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-28 06:01:00.889101 | orchestrator | Saturday 28 March 2026 06:00:34 +0000 (0:00:02.313) 0:46:41.171 ******** 2026-03-28 06:01:00.889111 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889121 | orchestrator | 2026-03-28 06:01:00.889131 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-28 06:01:00.889154 | orchestrator | Saturday 28 March 2026 06:00:35 +0000 (0:00:01.113) 0:46:42.285 ******** 2026-03-28 06:01:00.889164 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889174 | orchestrator | 2026-03-28 06:01:00.889184 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-28 06:01:00.889193 | orchestrator | Saturday 28 March 2026 06:00:36 +0000 (0:00:00.784) 0:46:43.069 ******** 2026-03-28 06:01:00.889203 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889212 | orchestrator | 2026-03-28 06:01:00.889222 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-28 06:01:00.889232 | orchestrator | Saturday 28 March 2026 06:00:37 +0000 (0:00:00.981) 0:46:44.051 ******** 2026-03-28 06:01:00.889241 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-28 06:01:00.889251 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-28 06:01:00.889261 | orchestrator | 2026-03-28 06:01:00.889270 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-28 06:01:00.889280 | orchestrator | Saturday 28 March 2026 06:00:41 +0000 (0:00:03.632) 0:46:47.683 ******** 2026-03-28 06:01:00.889289 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-28 06:01:00.889299 | orchestrator | 2026-03-28 06:01:00.889309 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 06:01:00.889318 | orchestrator | Saturday 28 March 2026 06:00:42 +0000 (0:00:01.143) 0:46:48.827 ******** 2026-03-28 06:01:00.889328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-28 06:01:00.889337 | orchestrator | 2026-03-28 06:01:00.889347 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 06:01:00.889357 | orchestrator | Saturday 28 March 2026 06:00:43 +0000 (0:00:01.155) 0:46:49.983 ******** 2026-03-28 06:01:00.889366 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889376 | orchestrator | 2026-03-28 06:01:00.889385 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 06:01:00.889395 | orchestrator | Saturday 28 March 2026 06:00:44 +0000 (0:00:01.127) 0:46:51.111 ******** 2026-03-28 06:01:00.889405 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889414 | orchestrator | 2026-03-28 06:01:00.889424 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 06:01:00.889434 | orchestrator | Saturday 28 March 2026 06:00:46 +0000 (0:00:01.530) 0:46:52.641 ******** 2026-03-28 06:01:00.889443 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889453 | orchestrator | 2026-03-28 06:01:00.889469 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 06:01:00.889478 | orchestrator | 
Saturday 28 March 2026 06:00:47 +0000 (0:00:01.579) 0:46:54.221 ******** 2026-03-28 06:01:00.889488 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889497 | orchestrator | 2026-03-28 06:01:00.889507 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 06:01:00.889517 | orchestrator | Saturday 28 March 2026 06:00:49 +0000 (0:00:01.588) 0:46:55.810 ******** 2026-03-28 06:01:00.889526 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889536 | orchestrator | 2026-03-28 06:01:00.889546 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 06:01:00.889555 | orchestrator | Saturday 28 March 2026 06:00:50 +0000 (0:00:01.162) 0:46:56.972 ******** 2026-03-28 06:01:00.889565 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889575 | orchestrator | 2026-03-28 06:01:00.889584 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 06:01:00.889594 | orchestrator | Saturday 28 March 2026 06:00:51 +0000 (0:00:01.168) 0:46:58.140 ******** 2026-03-28 06:01:00.889603 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889613 | orchestrator | 2026-03-28 06:01:00.889623 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 06:01:00.889632 | orchestrator | Saturday 28 March 2026 06:00:52 +0000 (0:00:01.132) 0:46:59.273 ******** 2026-03-28 06:01:00.889642 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889651 | orchestrator | 2026-03-28 06:01:00.889661 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 06:01:00.889670 | orchestrator | Saturday 28 March 2026 06:00:54 +0000 (0:00:01.544) 0:47:00.817 ******** 2026-03-28 06:01:00.889680 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889689 | orchestrator | 2026-03-28 06:01:00.889699 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 06:01:00.889708 | orchestrator | Saturday 28 March 2026 06:00:55 +0000 (0:00:01.600) 0:47:02.417 ******** 2026-03-28 06:01:00.889718 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889728 | orchestrator | 2026-03-28 06:01:00.889743 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 06:01:00.889752 | orchestrator | Saturday 28 March 2026 06:00:56 +0000 (0:00:00.798) 0:47:03.216 ******** 2026-03-28 06:01:00.889762 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889772 | orchestrator | 2026-03-28 06:01:00.889781 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 06:01:00.889791 | orchestrator | Saturday 28 March 2026 06:00:57 +0000 (0:00:00.818) 0:47:04.035 ******** 2026-03-28 06:01:00.889800 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889810 | orchestrator | 2026-03-28 06:01:00.889819 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 06:01:00.889829 | orchestrator | Saturday 28 March 2026 06:00:58 +0000 (0:00:00.811) 0:47:04.847 ******** 2026-03-28 06:01:00.889839 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889848 | orchestrator | 2026-03-28 06:01:00.889858 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 06:01:00.889867 | orchestrator | Saturday 28 March 2026 06:00:59 +0000 (0:00:00.805) 0:47:05.653 ******** 2026-03-28 06:01:00.889877 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:00.889887 | orchestrator | 2026-03-28 06:01:00.889896 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 06:01:00.889906 | orchestrator | Saturday 28 March 2026 06:01:00 +0000 (0:00:00.830) 0:47:06.484 ******** 2026-03-28 06:01:00.889916 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:00.889926 | orchestrator | 2026-03-28 06:01:00.889941 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 06:01:42.743747 | orchestrator | Saturday 28 March 2026 06:01:00 +0000 (0:00:00.823) 0:47:07.308 ******** 2026-03-28 06:01:42.743872 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.743889 | orchestrator | 2026-03-28 06:01:42.743924 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 06:01:42.743935 | orchestrator | Saturday 28 March 2026 06:01:01 +0000 (0:00:00.875) 0:47:08.184 ******** 2026-03-28 06:01:42.743944 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.743954 | orchestrator | 2026-03-28 06:01:42.743964 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 06:01:42.743974 | orchestrator | Saturday 28 March 2026 06:01:02 +0000 (0:00:00.821) 0:47:09.005 ******** 2026-03-28 06:01:42.743984 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.743995 | orchestrator | 2026-03-28 06:01:42.744005 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 06:01:42.744015 | orchestrator | Saturday 28 March 2026 06:01:03 +0000 (0:00:00.809) 0:47:09.814 ******** 2026-03-28 06:01:42.744025 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.744035 | orchestrator | 2026-03-28 06:01:42.744044 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 06:01:42.744101 | orchestrator | Saturday 28 March 2026 06:01:04 +0000 (0:00:00.847) 0:47:10.662 ******** 2026-03-28 06:01:42.744111 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744121 | orchestrator | 2026-03-28 06:01:42.744130 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 
06:01:42.744140 | orchestrator | Saturday 28 March 2026 06:01:05 +0000 (0:00:00.778) 0:47:11.441 ******** 2026-03-28 06:01:42.744150 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744160 | orchestrator | 2026-03-28 06:01:42.744170 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 06:01:42.744180 | orchestrator | Saturday 28 March 2026 06:01:05 +0000 (0:00:00.831) 0:47:12.273 ******** 2026-03-28 06:01:42.744190 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744200 | orchestrator | 2026-03-28 06:01:42.744210 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 06:01:42.744220 | orchestrator | Saturday 28 March 2026 06:01:06 +0000 (0:00:00.772) 0:47:13.046 ******** 2026-03-28 06:01:42.744229 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744239 | orchestrator | 2026-03-28 06:01:42.744249 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 06:01:42.744259 | orchestrator | Saturday 28 March 2026 06:01:07 +0000 (0:00:00.791) 0:47:13.837 ******** 2026-03-28 06:01:42.744269 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744281 | orchestrator | 2026-03-28 06:01:42.744292 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 06:01:42.744304 | orchestrator | Saturday 28 March 2026 06:01:08 +0000 (0:00:00.785) 0:47:14.623 ******** 2026-03-28 06:01:42.744315 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744326 | orchestrator | 2026-03-28 06:01:42.744337 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 06:01:42.744355 | orchestrator | Saturday 28 March 2026 06:01:09 +0000 (0:00:00.828) 0:47:15.451 ******** 2026-03-28 06:01:42.744371 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744390 | 
orchestrator | 2026-03-28 06:01:42.744408 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 06:01:42.744426 | orchestrator | Saturday 28 March 2026 06:01:09 +0000 (0:00:00.783) 0:47:16.235 ******** 2026-03-28 06:01:42.744445 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744463 | orchestrator | 2026-03-28 06:01:42.744477 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 06:01:42.744489 | orchestrator | Saturday 28 March 2026 06:01:10 +0000 (0:00:00.801) 0:47:17.036 ******** 2026-03-28 06:01:42.744500 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744511 | orchestrator | 2026-03-28 06:01:42.744522 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 06:01:42.744533 | orchestrator | Saturday 28 March 2026 06:01:11 +0000 (0:00:00.771) 0:47:17.808 ******** 2026-03-28 06:01:42.744544 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744555 | orchestrator | 2026-03-28 06:01:42.744573 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 06:01:42.744584 | orchestrator | Saturday 28 March 2026 06:01:12 +0000 (0:00:00.851) 0:47:18.660 ******** 2026-03-28 06:01:42.744595 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744607 | orchestrator | 2026-03-28 06:01:42.744618 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 06:01:42.744642 | orchestrator | Saturday 28 March 2026 06:01:13 +0000 (0:00:00.776) 0:47:19.436 ******** 2026-03-28 06:01:42.744652 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744662 | orchestrator | 2026-03-28 06:01:42.744671 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 06:01:42.744681 | orchestrator | Saturday 28 
March 2026 06:01:13 +0000 (0:00:00.783) 0:47:20.219 ******** 2026-03-28 06:01:42.744691 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.744700 | orchestrator | 2026-03-28 06:01:42.744710 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 06:01:42.744719 | orchestrator | Saturday 28 March 2026 06:01:15 +0000 (0:00:01.598) 0:47:21.818 ******** 2026-03-28 06:01:42.744729 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.744739 | orchestrator | 2026-03-28 06:01:42.744748 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 06:01:42.744758 | orchestrator | Saturday 28 March 2026 06:01:18 +0000 (0:00:02.818) 0:47:24.636 ******** 2026-03-28 06:01:42.744767 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-28 06:01:42.744778 | orchestrator | 2026-03-28 06:01:42.744788 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 06:01:42.744798 | orchestrator | Saturday 28 March 2026 06:01:19 +0000 (0:00:01.155) 0:47:25.791 ******** 2026-03-28 06:01:42.744807 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744817 | orchestrator | 2026-03-28 06:01:42.744827 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 06:01:42.744853 | orchestrator | Saturday 28 March 2026 06:01:20 +0000 (0:00:01.186) 0:47:26.978 ******** 2026-03-28 06:01:42.744864 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744874 | orchestrator | 2026-03-28 06:01:42.744883 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 06:01:42.744893 | orchestrator | Saturday 28 March 2026 06:01:21 +0000 (0:00:01.142) 0:47:28.121 ******** 2026-03-28 06:01:42.744902 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 06:01:42.744912 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 06:01:42.744922 | orchestrator | 2026-03-28 06:01:42.744931 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 06:01:42.744941 | orchestrator | Saturday 28 March 2026 06:01:23 +0000 (0:00:01.801) 0:47:29.922 ******** 2026-03-28 06:01:42.744951 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.744960 | orchestrator | 2026-03-28 06:01:42.744970 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 06:01:42.744979 | orchestrator | Saturday 28 March 2026 06:01:24 +0000 (0:00:01.477) 0:47:31.399 ******** 2026-03-28 06:01:42.744989 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.744999 | orchestrator | 2026-03-28 06:01:42.745008 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 06:01:42.745018 | orchestrator | Saturday 28 March 2026 06:01:26 +0000 (0:00:01.150) 0:47:32.550 ******** 2026-03-28 06:01:42.745027 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745037 | orchestrator | 2026-03-28 06:01:42.745073 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 06:01:42.745091 | orchestrator | Saturday 28 March 2026 06:01:27 +0000 (0:00:00.929) 0:47:33.479 ******** 2026-03-28 06:01:42.745101 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745111 | orchestrator | 2026-03-28 06:01:42.745121 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 06:01:42.745138 | orchestrator | Saturday 28 March 2026 06:01:27 +0000 (0:00:00.777) 0:47:34.257 ******** 2026-03-28 06:01:42.745148 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-03-28 06:01:42.745157 | orchestrator | 2026-03-28 06:01:42.745167 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 06:01:42.745176 | orchestrator | Saturday 28 March 2026 06:01:28 +0000 (0:00:01.165) 0:47:35.423 ******** 2026-03-28 06:01:42.745186 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.745196 | orchestrator | 2026-03-28 06:01:42.745206 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 06:01:42.745216 | orchestrator | Saturday 28 March 2026 06:01:30 +0000 (0:00:01.698) 0:47:37.121 ******** 2026-03-28 06:01:42.745225 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 06:01:42.745235 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 06:01:42.745245 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 06:01:42.745254 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745264 | orchestrator | 2026-03-28 06:01:42.745273 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 06:01:42.745283 | orchestrator | Saturday 28 March 2026 06:01:31 +0000 (0:00:01.191) 0:47:38.313 ******** 2026-03-28 06:01:42.745293 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745302 | orchestrator | 2026-03-28 06:01:42.745312 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-28 06:01:42.745321 | orchestrator | Saturday 28 March 2026 06:01:33 +0000 (0:00:01.155) 0:47:39.469 ******** 2026-03-28 06:01:42.745331 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745341 | orchestrator | 2026-03-28 06:01:42.745351 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 06:01:42.745360 | 
orchestrator | Saturday 28 March 2026 06:01:34 +0000 (0:00:01.190) 0:47:40.659 ******** 2026-03-28 06:01:42.745370 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745380 | orchestrator | 2026-03-28 06:01:42.745389 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 06:01:42.745399 | orchestrator | Saturday 28 March 2026 06:01:35 +0000 (0:00:01.180) 0:47:41.840 ******** 2026-03-28 06:01:42.745409 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745418 | orchestrator | 2026-03-28 06:01:42.745428 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 06:01:42.745442 | orchestrator | Saturday 28 March 2026 06:01:36 +0000 (0:00:01.194) 0:47:43.034 ******** 2026-03-28 06:01:42.745452 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745462 | orchestrator | 2026-03-28 06:01:42.745472 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 06:01:42.745481 | orchestrator | Saturday 28 March 2026 06:01:37 +0000 (0:00:00.794) 0:47:43.829 ******** 2026-03-28 06:01:42.745491 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.745500 | orchestrator | 2026-03-28 06:01:42.745510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 06:01:42.745521 | orchestrator | Saturday 28 March 2026 06:01:39 +0000 (0:00:02.095) 0:47:45.924 ******** 2026-03-28 06:01:42.745537 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:01:42.745554 | orchestrator | 2026-03-28 06:01:42.745569 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 06:01:42.745585 | orchestrator | Saturday 28 March 2026 06:01:40 +0000 (0:00:00.780) 0:47:46.705 ******** 2026-03-28 06:01:42.745602 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-03-28 06:01:42.745617 | orchestrator | 2026-03-28 06:01:42.745632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 06:01:42.745648 | orchestrator | Saturday 28 March 2026 06:01:41 +0000 (0:00:01.284) 0:47:47.989 ******** 2026-03-28 06:01:42.745674 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:01:42.745691 | orchestrator | 2026-03-28 06:01:42.745708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 06:01:42.745737 | orchestrator | Saturday 28 March 2026 06:01:42 +0000 (0:00:01.175) 0:47:49.165 ******** 2026-03-28 06:02:27.280183 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280282 | orchestrator | 2026-03-28 06:02:27.280295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 06:02:27.280305 | orchestrator | Saturday 28 March 2026 06:01:43 +0000 (0:00:01.148) 0:47:50.313 ******** 2026-03-28 06:02:27.280314 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280322 | orchestrator | 2026-03-28 06:02:27.280330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 06:02:27.280338 | orchestrator | Saturday 28 March 2026 06:01:45 +0000 (0:00:01.155) 0:47:51.469 ******** 2026-03-28 06:02:27.280346 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280354 | orchestrator | 2026-03-28 06:02:27.280362 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 06:02:27.280370 | orchestrator | Saturday 28 March 2026 06:01:46 +0000 (0:00:01.163) 0:47:52.633 ******** 2026-03-28 06:02:27.280378 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280386 | orchestrator | 2026-03-28 06:02:27.280394 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 06:02:27.280402 | orchestrator | 
Saturday 28 March 2026 06:01:47 +0000 (0:00:01.201) 0:47:53.835 ******** 2026-03-28 06:02:27.280410 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280417 | orchestrator | 2026-03-28 06:02:27.280425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 06:02:27.280433 | orchestrator | Saturday 28 March 2026 06:01:48 +0000 (0:00:01.219) 0:47:55.055 ******** 2026-03-28 06:02:27.280441 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280449 | orchestrator | 2026-03-28 06:02:27.280457 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 06:02:27.280465 | orchestrator | Saturday 28 March 2026 06:01:49 +0000 (0:00:01.141) 0:47:56.197 ******** 2026-03-28 06:02:27.280473 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280480 | orchestrator | 2026-03-28 06:02:27.280488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 06:02:27.280496 | orchestrator | Saturday 28 March 2026 06:01:50 +0000 (0:00:01.156) 0:47:57.353 ******** 2026-03-28 06:02:27.280504 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:02:27.280513 | orchestrator | 2026-03-28 06:02:27.280521 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 06:02:27.280529 | orchestrator | Saturday 28 March 2026 06:01:51 +0000 (0:00:00.861) 0:47:58.215 ******** 2026-03-28 06:02:27.280537 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-28 06:02:27.280546 | orchestrator | 2026-03-28 06:02:27.280553 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 06:02:27.280561 | orchestrator | Saturday 28 March 2026 06:01:53 +0000 (0:00:01.261) 0:47:59.477 ******** 2026-03-28 06:02:27.280569 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-03-28 06:02:27.280578 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-28 06:02:27.280586 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-28 06:02:27.280593 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-28 06:02:27.280601 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-28 06:02:27.280609 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-28 06:02:27.280617 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-28 06:02:27.280625 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-28 06:02:27.280633 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 06:02:27.280641 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 06:02:27.280669 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 06:02:27.280677 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 06:02:27.280685 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 06:02:27.280693 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 06:02:27.280701 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-28 06:02:27.280709 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-28 06:02:27.280716 | orchestrator | 2026-03-28 06:02:27.280725 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 06:02:27.280746 | orchestrator | Saturday 28 March 2026 06:01:59 +0000 (0:00:06.116) 0:48:05.594 ******** 2026-03-28 06:02:27.280756 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-28 06:02:27.280766 | orchestrator | 2026-03-28 06:02:27.280776 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-28 06:02:27.280785 | orchestrator | Saturday 28 March 2026 06:02:00 +0000 (0:00:01.185) 0:48:06.779 ******** 2026-03-28 06:02:27.280795 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:02:27.280805 | orchestrator | 2026-03-28 06:02:27.280814 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-28 06:02:27.280823 | orchestrator | Saturday 28 March 2026 06:02:01 +0000 (0:00:01.537) 0:48:08.317 ******** 2026-03-28 06:02:27.280833 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:02:27.280842 | orchestrator | 2026-03-28 06:02:27.280852 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 06:02:27.280861 | orchestrator | Saturday 28 March 2026 06:02:03 +0000 (0:00:01.660) 0:48:09.978 ******** 2026-03-28 06:02:27.280871 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280880 | orchestrator | 2026-03-28 06:02:27.280890 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 06:02:27.280913 | orchestrator | Saturday 28 March 2026 06:02:04 +0000 (0:00:00.801) 0:48:10.779 ******** 2026-03-28 06:02:27.280923 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280931 | orchestrator | 2026-03-28 06:02:27.280941 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 06:02:27.280950 | orchestrator | Saturday 28 March 2026 06:02:05 +0000 (0:00:00.781) 0:48:11.560 ******** 2026-03-28 06:02:27.280960 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.280969 | orchestrator | 2026-03-28 06:02:27.280978 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-28 06:02:27.280988 | orchestrator | Saturday 28 March 2026 06:02:05 +0000 (0:00:00.820) 0:48:12.381 ******** 2026-03-28 06:02:27.280998 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281008 | orchestrator | 2026-03-28 06:02:27.281017 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 06:02:27.281047 | orchestrator | Saturday 28 March 2026 06:02:06 +0000 (0:00:00.782) 0:48:13.164 ******** 2026-03-28 06:02:27.281056 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281066 | orchestrator | 2026-03-28 06:02:27.281075 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 06:02:27.281085 | orchestrator | Saturday 28 March 2026 06:02:07 +0000 (0:00:00.788) 0:48:13.953 ******** 2026-03-28 06:02:27.281093 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281101 | orchestrator | 2026-03-28 06:02:27.281109 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 06:02:27.281117 | orchestrator | Saturday 28 March 2026 06:02:08 +0000 (0:00:00.838) 0:48:14.792 ******** 2026-03-28 06:02:27.281125 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281139 | orchestrator | 2026-03-28 06:02:27.281147 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-28 06:02:27.281155 | orchestrator | Saturday 28 March 2026 06:02:09 +0000 (0:00:00.794) 0:48:15.586 ******** 2026-03-28 06:02:27.281163 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281171 | orchestrator | 2026-03-28 06:02:27.281179 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 06:02:27.281187 | orchestrator | Saturday 28 
March 2026 06:02:09 +0000 (0:00:00.788) 0:48:16.375 ******** 2026-03-28 06:02:27.281195 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281202 | orchestrator | 2026-03-28 06:02:27.281210 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 06:02:27.281218 | orchestrator | Saturday 28 March 2026 06:02:10 +0000 (0:00:00.861) 0:48:17.236 ******** 2026-03-28 06:02:27.281226 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281234 | orchestrator | 2026-03-28 06:02:27.281242 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 06:02:27.281250 | orchestrator | Saturday 28 March 2026 06:02:11 +0000 (0:00:00.801) 0:48:18.038 ******** 2026-03-28 06:02:27.281258 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:02:27.281265 | orchestrator | 2026-03-28 06:02:27.281273 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 06:02:27.281281 | orchestrator | Saturday 28 March 2026 06:02:12 +0000 (0:00:00.874) 0:48:18.913 ******** 2026-03-28 06:02:27.281289 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-28 06:02:27.281297 | orchestrator | 2026-03-28 06:02:27.281305 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 06:02:27.281313 | orchestrator | Saturday 28 March 2026 06:02:16 +0000 (0:00:04.160) 0:48:23.074 ******** 2026-03-28 06:02:27.281321 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:02:27.281329 | orchestrator | 2026-03-28 06:02:27.281337 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 06:02:27.281345 | orchestrator | Saturday 28 March 2026 06:02:17 +0000 (0:00:00.859) 0:48:23.933 ******** 2026-03-28 06:02:27.281355 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-28 06:02:27.281370 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-28 06:02:27.281379 | orchestrator | 2026-03-28 06:02:27.281388 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 06:02:27.281395 | orchestrator | Saturday 28 March 2026 06:02:24 +0000 (0:00:07.296) 0:48:31.230 ******** 2026-03-28 06:02:27.281403 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281411 | orchestrator | 2026-03-28 06:02:27.281419 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 06:02:27.281427 | orchestrator | Saturday 28 March 2026 06:02:25 +0000 (0:00:00.817) 0:48:32.047 ******** 2026-03-28 06:02:27.281435 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281442 | orchestrator | 2026-03-28 06:02:27.281450 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:02:27.281458 | orchestrator | Saturday 28 March 2026 06:02:26 +0000 (0:00:00.801) 0:48:32.849 ******** 2026-03-28 06:02:27.281466 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:02:27.281474 | orchestrator | 2026-03-28 06:02:27.281489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-28 06:02:27.281502 | orchestrator | Saturday 28 March 2026 06:02:27 +0000 (0:00:00.849) 0:48:33.698 ******** 2026-03-28 06:03:14.053300 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053387 | orchestrator | 2026-03-28 06:03:14.053397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:03:14.053405 | orchestrator | Saturday 28 March 2026 06:02:28 +0000 (0:00:00.810) 0:48:34.509 ******** 2026-03-28 06:03:14.053411 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053416 | orchestrator | 2026-03-28 06:03:14.053422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:03:14.053428 | orchestrator | Saturday 28 March 2026 06:02:28 +0000 (0:00:00.810) 0:48:35.319 ******** 2026-03-28 06:03:14.053434 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.053440 | orchestrator | 2026-03-28 06:03:14.053446 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:03:14.053451 | orchestrator | Saturday 28 March 2026 06:02:29 +0000 (0:00:00.905) 0:48:36.225 ******** 2026-03-28 06:03:14.053457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:03:14.053463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:03:14.053469 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:03:14.053474 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053480 | orchestrator | 2026-03-28 06:03:14.053486 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:03:14.053491 | orchestrator | Saturday 28 March 2026 06:02:31 +0000 (0:00:01.531) 0:48:37.757 ******** 2026-03-28 06:03:14.053497 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:03:14.053502 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:03:14.053511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:03:14.053517 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053522 | orchestrator | 2026-03-28 06:03:14.053528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:03:14.053534 | orchestrator | Saturday 28 March 2026 06:02:32 +0000 (0:00:01.470) 0:48:39.227 ******** 2026-03-28 06:03:14.053539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:03:14.053545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:03:14.053550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:03:14.053556 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053562 | orchestrator | 2026-03-28 06:03:14.053567 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:03:14.053573 | orchestrator | Saturday 28 March 2026 06:02:33 +0000 (0:00:01.100) 0:48:40.328 ******** 2026-03-28 06:03:14.053579 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.053585 | orchestrator | 2026-03-28 06:03:14.053590 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:03:14.053596 | orchestrator | Saturday 28 March 2026 06:02:34 +0000 (0:00:00.810) 0:48:41.138 ******** 2026-03-28 06:03:14.053602 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 06:03:14.053608 | orchestrator | 2026-03-28 06:03:14.053613 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 06:03:14.053619 | orchestrator | Saturday 28 March 2026 06:02:35 +0000 (0:00:01.095) 0:48:42.233 ******** 2026-03-28 06:03:14.053625 | orchestrator | changed: [testbed-node-5] 2026-03-28 06:03:14.053630 | orchestrator | 
2026-03-28 06:03:14.053636 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-28 06:03:14.053642 | orchestrator | Saturday 28 March 2026 06:02:37 +0000 (0:00:01.447) 0:48:43.681 ******** 2026-03-28 06:03:14.053647 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.053653 | orchestrator | 2026-03-28 06:03:14.053659 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-28 06:03:14.053682 | orchestrator | Saturday 28 March 2026 06:02:38 +0000 (0:00:00.782) 0:48:44.463 ******** 2026-03-28 06:03:14.053688 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:03:14.053694 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:03:14.053699 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:03:14.053705 | orchestrator | 2026-03-28 06:03:14.053710 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-28 06:03:14.053726 | orchestrator | Saturday 28 March 2026 06:02:39 +0000 (0:00:01.673) 0:48:46.137 ******** 2026-03-28 06:03:14.053732 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-28 06:03:14.053738 | orchestrator | 2026-03-28 06:03:14.053743 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-28 06:03:14.053749 | orchestrator | Saturday 28 March 2026 06:02:40 +0000 (0:00:01.246) 0:48:47.384 ******** 2026-03-28 06:03:14.053754 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053760 | orchestrator | 2026-03-28 06:03:14.053765 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-28 06:03:14.053771 | orchestrator | Saturday 28 March 2026 06:02:42 +0000 (0:00:01.130) 
0:48:48.514 ******** 2026-03-28 06:03:14.053776 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053782 | orchestrator | 2026-03-28 06:03:14.053787 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-28 06:03:14.053793 | orchestrator | Saturday 28 March 2026 06:02:43 +0000 (0:00:01.142) 0:48:49.657 ******** 2026-03-28 06:03:14.053798 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.053804 | orchestrator | 2026-03-28 06:03:14.053809 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-28 06:03:14.053815 | orchestrator | Saturday 28 March 2026 06:02:44 +0000 (0:00:01.529) 0:48:51.187 ******** 2026-03-28 06:03:14.053820 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.053826 | orchestrator | 2026-03-28 06:03:14.053831 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-28 06:03:14.053837 | orchestrator | Saturday 28 March 2026 06:02:45 +0000 (0:00:01.164) 0:48:52.352 ******** 2026-03-28 06:03:14.053852 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-28 06:03:14.053859 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-28 06:03:14.053867 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-28 06:03:14.053873 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-28 06:03:14.053879 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-28 06:03:14.053886 | orchestrator | 2026-03-28 06:03:14.053892 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-28 06:03:14.053899 | orchestrator | Saturday 28 March 2026 06:02:48 +0000 (0:00:02.511) 0:48:54.863 ******** 2026-03-28 
06:03:14.053905 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.053911 | orchestrator | 2026-03-28 06:03:14.053918 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-28 06:03:14.053925 | orchestrator | Saturday 28 March 2026 06:02:49 +0000 (0:00:00.770) 0:48:55.633 ******** 2026-03-28 06:03:14.053931 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-28 06:03:14.053938 | orchestrator | 2026-03-28 06:03:14.053944 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-28 06:03:14.053950 | orchestrator | Saturday 28 March 2026 06:02:50 +0000 (0:00:01.123) 0:48:56.757 ******** 2026-03-28 06:03:14.053957 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-28 06:03:14.053963 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-28 06:03:14.053970 | orchestrator | 2026-03-28 06:03:14.053981 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-28 06:03:14.053988 | orchestrator | Saturday 28 March 2026 06:02:52 +0000 (0:00:01.874) 0:48:58.632 ******** 2026-03-28 06:03:14.053994 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:03:14.054001 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 06:03:14.054069 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:03:14.054076 | orchestrator | 2026-03-28 06:03:14.054083 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:03:14.054090 | orchestrator | Saturday 28 March 2026 06:02:55 +0000 (0:00:03.204) 0:49:01.836 ******** 2026-03-28 06:03:14.054096 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-28 06:03:14.054102 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 
06:03:14.054109 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054116 | orchestrator | 2026-03-28 06:03:14.054122 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-28 06:03:14.054129 | orchestrator | Saturday 28 March 2026 06:02:56 +0000 (0:00:01.587) 0:49:03.423 ******** 2026-03-28 06:03:14.054135 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.054142 | orchestrator | 2026-03-28 06:03:14.054149 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-28 06:03:14.054155 | orchestrator | Saturday 28 March 2026 06:02:57 +0000 (0:00:00.902) 0:49:04.326 ******** 2026-03-28 06:03:14.054162 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.054168 | orchestrator | 2026-03-28 06:03:14.054175 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-28 06:03:14.054181 | orchestrator | Saturday 28 March 2026 06:02:58 +0000 (0:00:00.786) 0:49:05.113 ******** 2026-03-28 06:03:14.054188 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.054194 | orchestrator | 2026-03-28 06:03:14.054201 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-28 06:03:14.054208 | orchestrator | Saturday 28 March 2026 06:02:59 +0000 (0:00:00.804) 0:49:05.917 ******** 2026-03-28 06:03:14.054214 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-28 06:03:14.054221 | orchestrator | 2026-03-28 06:03:14.054227 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-28 06:03:14.054234 | orchestrator | Saturday 28 March 2026 06:03:00 +0000 (0:00:01.329) 0:49:07.246 ******** 2026-03-28 06:03:14.054241 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054248 | orchestrator | 2026-03-28 06:03:14.054254 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-28 06:03:14.054263 | orchestrator | Saturday 28 March 2026 06:03:02 +0000 (0:00:01.462) 0:49:08.709 ******** 2026-03-28 06:03:14.054269 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054274 | orchestrator | 2026-03-28 06:03:14.054280 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-28 06:03:14.054285 | orchestrator | Saturday 28 March 2026 06:03:05 +0000 (0:00:03.368) 0:49:12.078 ******** 2026-03-28 06:03:14.054291 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-28 06:03:14.054297 | orchestrator | 2026-03-28 06:03:14.054302 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-28 06:03:14.054308 | orchestrator | Saturday 28 March 2026 06:03:06 +0000 (0:00:01.159) 0:49:13.237 ******** 2026-03-28 06:03:14.054313 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054319 | orchestrator | 2026-03-28 06:03:14.054324 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-28 06:03:14.054330 | orchestrator | Saturday 28 March 2026 06:03:08 +0000 (0:00:01.977) 0:49:15.215 ******** 2026-03-28 06:03:14.054335 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054341 | orchestrator | 2026-03-28 06:03:14.054346 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-28 06:03:14.054352 | orchestrator | Saturday 28 March 2026 06:03:10 +0000 (0:00:01.942) 0:49:17.158 ******** 2026-03-28 06:03:14.054362 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:03:14.054367 | orchestrator | 2026-03-28 06:03:14.054373 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-28 06:03:14.054378 | orchestrator | Saturday 28 March 2026 06:03:12 +0000 (0:00:02.191) 0:49:19.350 ******** 2026-03-28 
06:03:14.054384 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:03:14.054389 | orchestrator | 2026-03-28 06:03:14.054399 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-28 06:05:26.779211 | orchestrator | Saturday 28 March 2026 06:03:14 +0000 (0:00:01.126) 0:49:20.476 ******** 2026-03-28 06:05:26.779361 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.779389 | orchestrator | 2026-03-28 06:05:26.779409 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-28 06:05:26.779429 | orchestrator | Saturday 28 March 2026 06:03:15 +0000 (0:00:01.129) 0:49:21.606 ******** 2026-03-28 06:05:26.779447 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-28 06:05:26.779466 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-28 06:05:26.779484 | orchestrator | 2026-03-28 06:05:26.779505 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-28 06:05:26.779524 | orchestrator | Saturday 28 March 2026 06:03:17 +0000 (0:00:01.851) 0:49:23.457 ******** 2026-03-28 06:05:26.779542 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-28 06:05:26.779561 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-28 06:05:26.779580 | orchestrator | 2026-03-28 06:05:26.779596 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-28 06:05:26.779615 | orchestrator | Saturday 28 March 2026 06:03:19 +0000 (0:00:02.870) 0:49:26.328 ******** 2026-03-28 06:05:26.779635 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-28 06:05:26.779653 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-28 06:05:26.779672 | orchestrator | 2026-03-28 06:05:26.779690 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-28 06:05:26.779710 | orchestrator | Saturday 28 March 2026 06:03:24 +0000 (0:00:04.160) 
0:49:30.489 ******** 2026-03-28 06:05:26.779728 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.779746 | orchestrator | 2026-03-28 06:05:26.779764 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-28 06:05:26.779784 | orchestrator | Saturday 28 March 2026 06:03:25 +0000 (0:00:01.332) 0:49:31.821 ******** 2026-03-28 06:05:26.779804 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-28 06:05:26.779824 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:05:26.779842 | orchestrator | 2026-03-28 06:05:26.779860 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-28 06:05:26.779878 | orchestrator | Saturday 28 March 2026 06:03:38 +0000 (0:00:12.951) 0:49:44.773 ******** 2026-03-28 06:05:26.779896 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.779913 | orchestrator | 2026-03-28 06:05:26.779931 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-28 06:05:26.779981 | orchestrator | Saturday 28 March 2026 06:03:39 +0000 (0:00:00.892) 0:49:45.665 ******** 2026-03-28 06:05:26.780003 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780020 | orchestrator | 2026-03-28 06:05:26.780040 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-28 06:05:26.780059 | orchestrator | Saturday 28 March 2026 06:03:40 +0000 (0:00:00.774) 0:49:46.440 ******** 2026-03-28 06:05:26.780077 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780096 | orchestrator | 2026-03-28 06:05:26.780115 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-28 06:05:26.780134 | orchestrator | Saturday 28 March 2026 06:03:40 +0000 (0:00:00.838) 0:49:47.279 ******** 2026-03-28 06:05:26.780153 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:05:26.780171 | orchestrator | 2026-03-28 06:05:26.780224 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 06:05:26.780243 | orchestrator | Saturday 28 March 2026 06:03:42 +0000 (0:00:01.922) 0:49:49.201 ******** 2026-03-28 06:05:26.780261 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780280 | orchestrator | 2026-03-28 06:05:26.780299 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 06:05:26.780317 | orchestrator | Saturday 28 March 2026 06:03:43 +0000 (0:00:00.798) 0:49:50.000 ******** 2026-03-28 06:05:26.780335 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780355 | orchestrator | 2026-03-28 06:05:26.780374 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 06:05:26.780392 | orchestrator | Saturday 28 March 2026 06:03:44 +0000 (0:00:00.767) 0:49:50.768 ******** 2026-03-28 06:05:26.780410 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780429 | orchestrator | 2026-03-28 06:05:26.780466 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 06:05:26.780485 | orchestrator | Saturday 28 March 2026 06:03:45 +0000 (0:00:00.766) 0:49:51.534 ******** 2026-03-28 06:05:26.780503 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780522 | orchestrator | 2026-03-28 06:05:26.780540 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-28 06:05:26.780559 | orchestrator | Saturday 28 March 2026 06:03:45 +0000 (0:00:00.773) 0:49:52.308 ******** 2026-03-28 
06:05:26.780578 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780596 | orchestrator | 2026-03-28 06:05:26.780614 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-28 06:05:26.780633 | orchestrator | Saturday 28 March 2026 06:03:46 +0000 (0:00:00.792) 0:49:53.101 ******** 2026-03-28 06:05:26.780651 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780670 | orchestrator | 2026-03-28 06:05:26.780688 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 06:05:26.780707 | orchestrator | Saturday 28 March 2026 06:03:47 +0000 (0:00:00.774) 0:49:53.875 ******** 2026-03-28 06:05:26.780726 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:05:26.780744 | orchestrator | 2026-03-28 06:05:26.780762 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-28 06:05:26.780781 | orchestrator | 2026-03-28 06:05:26.780799 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:05:26.780817 | orchestrator | Saturday 28 March 2026 06:03:49 +0000 (0:00:01.928) 0:49:55.804 ******** 2026-03-28 06:05:26.780836 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:05:26.780855 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:05:26.780873 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:05:26.780891 | orchestrator | 2026-03-28 06:05:26.780910 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:05:26.780983 | orchestrator | Saturday 28 March 2026 06:03:51 +0000 (0:00:01.693) 0:49:57.497 ******** 2026-03-28 06:05:26.781003 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:05:26.781022 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:05:26.781041 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:05:26.781059 | orchestrator | 2026-03-28 06:05:26.781078 | orchestrator | TASK 
[Re-enable pg autoscale on pools] ***************************************** 2026-03-28 06:05:26.781096 | orchestrator | Saturday 28 March 2026 06:03:52 +0000 (0:00:01.408) 0:49:58.906 ******** 2026-03-28 06:05:26.781115 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-28 06:05:26.781134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-28 06:05:26.781153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-28 06:05:26.781172 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-28 06:05:26.781192 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-28 06:05:26.781223 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-28 06:05:26.781242 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-28 06:05:26.781261 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-28 06:05:26.781280 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-28 06:05:26.781298 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-28 06:05:26.781317 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-28 06:05:26.781336 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-28 06:05:26.781355 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-28 06:05:26.781373 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-28 06:05:26.781392 | orchestrator | 2026-03-28 06:05:26.781410 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-28 06:05:26.781429 | orchestrator | Saturday 28 March 2026 06:05:08 +0000 (0:01:15.639) 0:51:14.545 ******** 2026-03-28 06:05:26.781448 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-28 06:05:26.781466 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-28 06:05:26.781485 | orchestrator | 2026-03-28 06:05:26.781503 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-28 06:05:26.781520 | orchestrator | Saturday 28 March 2026 06:05:13 +0000 (0:00:05.337) 0:51:19.882 ******** 2026-03-28 06:05:26.781538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:05:26.781556 | orchestrator | 2026-03-28 06:05:26.781573 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-28 06:05:26.781590 | orchestrator | 2026-03-28 06:05:26.781609 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 06:05:26.781626 | orchestrator | Saturday 28 March 2026 06:05:16 +0000 (0:00:03.388) 0:51:23.271 ******** 2026-03-28 06:05:26.781644 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-28 06:05:26.781662 | orchestrator | 2026-03-28 06:05:26.781681 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 06:05:26.781708 | orchestrator | Saturday 28 March 2026 06:05:18 +0000 (0:00:01.181) 0:51:24.453 ******** 2026-03-28 06:05:26.781727 | orchestrator | ok: 
[testbed-node-0] 2026-03-28 06:05:26.781745 | orchestrator | 2026-03-28 06:05:26.781764 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 06:05:26.781782 | orchestrator | Saturday 28 March 2026 06:05:19 +0000 (0:00:01.470) 0:51:25.923 ******** 2026-03-28 06:05:26.781801 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:26.781819 | orchestrator | 2026-03-28 06:05:26.781837 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:05:26.781856 | orchestrator | Saturday 28 March 2026 06:05:20 +0000 (0:00:01.153) 0:51:27.077 ******** 2026-03-28 06:05:26.781874 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:26.781893 | orchestrator | 2026-03-28 06:05:26.781911 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:05:26.781929 | orchestrator | Saturday 28 March 2026 06:05:22 +0000 (0:00:01.491) 0:51:28.568 ******** 2026-03-28 06:05:26.781948 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:26.782109 | orchestrator | 2026-03-28 06:05:26.782129 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 06:05:26.782160 | orchestrator | Saturday 28 March 2026 06:05:23 +0000 (0:00:01.139) 0:51:29.708 ******** 2026-03-28 06:05:26.782192 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:26.782210 | orchestrator | 2026-03-28 06:05:26.782229 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 06:05:26.782247 | orchestrator | Saturday 28 March 2026 06:05:24 +0000 (0:00:01.193) 0:51:30.902 ******** 2026-03-28 06:05:26.782265 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:26.782284 | orchestrator | 2026-03-28 06:05:26.782302 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 06:05:26.782321 | orchestrator | Saturday 
28 March 2026 06:05:25 +0000 (0:00:01.142) 0:51:32.044 ******** 2026-03-28 06:05:26.782353 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.660648 | orchestrator | 2026-03-28 06:05:51.660758 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 06:05:51.660775 | orchestrator | Saturday 28 March 2026 06:05:26 +0000 (0:00:01.158) 0:51:33.203 ******** 2026-03-28 06:05:51.660783 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.660790 | orchestrator | 2026-03-28 06:05:51.660796 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 06:05:51.660801 | orchestrator | Saturday 28 March 2026 06:05:28 +0000 (0:00:01.283) 0:51:34.486 ******** 2026-03-28 06:05:51.660807 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 06:05:51.660812 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:05:51.660818 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:05:51.660823 | orchestrator | 2026-03-28 06:05:51.660828 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 06:05:51.660834 | orchestrator | Saturday 28 March 2026 06:05:29 +0000 (0:00:01.818) 0:51:36.305 ******** 2026-03-28 06:05:51.660842 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.660850 | orchestrator | 2026-03-28 06:05:51.660858 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 06:05:51.660865 | orchestrator | Saturday 28 March 2026 06:05:31 +0000 (0:00:01.244) 0:51:37.549 ******** 2026-03-28 06:05:51.660873 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 06:05:51.660881 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:05:51.660889 | orchestrator 
| ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:05:51.660898 | orchestrator | 2026-03-28 06:05:51.660905 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 06:05:51.660913 | orchestrator | Saturday 28 March 2026 06:05:33 +0000 (0:00:02.878) 0:51:40.428 ******** 2026-03-28 06:05:51.660922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 06:05:51.660930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 06:05:51.660938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 06:05:51.661010 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661018 | orchestrator | 2026-03-28 06:05:51.661026 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 06:05:51.661035 | orchestrator | Saturday 28 March 2026 06:05:35 +0000 (0:00:01.499) 0:51:41.928 ******** 2026-03-28 06:05:51.661046 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661057 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661098 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661106 | orchestrator | 2026-03-28 06:05:51.661114 | 
orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 06:05:51.661122 | orchestrator | Saturday 28 March 2026 06:05:37 +0000 (0:00:01.680) 0:51:43.608 ******** 2026-03-28 06:05:51.661147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661159 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:05:51.661176 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661184 | orchestrator | 2026-03-28 06:05:51.661192 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 06:05:51.661218 | orchestrator | Saturday 28 March 2026 06:05:38 +0000 (0:00:01.199) 0:51:44.808 ******** 2026-03-28 06:05:51.661230 | orchestrator | ok: [testbed-node-0] 
=> (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 06:05:31.664037', 'end': '2026-03-28 06:05:31.715356', 'delta': '0:00:00.051319', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 06:05:51.661242 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:05:32.238911', 'end': '2026-03-28 06:05:32.282576', 'delta': '0:00:00.043665', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 06:05:51.661250 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:05:32.804238', 'end': '2026-03-28 06:05:32.863122', 'delta': '0:00:00.058884', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 06:05:51.661265 | orchestrator | 2026-03-28 06:05:51.661273 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 06:05:51.661281 | orchestrator | Saturday 28 March 2026 06:05:39 +0000 (0:00:01.278) 0:51:46.086 ******** 2026-03-28 06:05:51.661288 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.661296 | orchestrator | 2026-03-28 06:05:51.661303 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 06:05:51.661311 | orchestrator | Saturday 28 March 2026 06:05:41 +0000 (0:00:01.361) 0:51:47.448 ******** 2026-03-28 06:05:51.661319 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661328 | orchestrator | 2026-03-28 06:05:51.661336 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 06:05:51.661348 | orchestrator | Saturday 28 March 2026 06:05:42 +0000 (0:00:01.249) 0:51:48.698 ******** 2026-03-28 06:05:51.661357 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.661365 | orchestrator | 2026-03-28 06:05:51.661373 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 06:05:51.661381 | orchestrator | Saturday 28 March 2026 06:05:43 +0000 (0:00:01.164) 0:51:49.862 ******** 2026-03-28 06:05:51.661388 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.661396 | orchestrator | 2026-03-28 06:05:51.661405 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:05:51.661413 | orchestrator | Saturday 28 March 2026 06:05:45 +0000 (0:00:02.035) 0:51:51.897 
******** 2026-03-28 06:05:51.661421 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:05:51.661430 | orchestrator | 2026-03-28 06:05:51.661438 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 06:05:51.661446 | orchestrator | Saturday 28 March 2026 06:05:46 +0000 (0:00:01.171) 0:51:53.069 ******** 2026-03-28 06:05:51.661454 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661462 | orchestrator | 2026-03-28 06:05:51.661471 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 06:05:51.661479 | orchestrator | Saturday 28 March 2026 06:05:47 +0000 (0:00:01.315) 0:51:54.384 ******** 2026-03-28 06:05:51.661487 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661495 | orchestrator | 2026-03-28 06:05:51.661503 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:05:51.661511 | orchestrator | Saturday 28 March 2026 06:05:49 +0000 (0:00:01.291) 0:51:55.676 ******** 2026-03-28 06:05:51.661519 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:05:51.661527 | orchestrator | 2026-03-28 06:05:51.661535 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 06:05:51.661543 | orchestrator | Saturday 28 March 2026 06:05:50 +0000 (0:00:01.220) 0:51:56.896 ******** 2026-03-28 06:05:51.661557 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.260928 | orchestrator | 2026-03-28 06:06:00.261077 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 06:06:00.261087 | orchestrator | Saturday 28 March 2026 06:05:51 +0000 (0:00:01.189) 0:51:58.086 ******** 2026-03-28 06:06:00.261094 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261101 | orchestrator | 2026-03-28 06:06:00.261108 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-03-28 06:06:00.261114 | orchestrator | Saturday 28 March 2026 06:05:52 +0000 (0:00:01.199) 0:51:59.285 ******** 2026-03-28 06:06:00.261121 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261127 | orchestrator | 2026-03-28 06:06:00.261133 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 06:06:00.261139 | orchestrator | Saturday 28 March 2026 06:05:54 +0000 (0:00:01.225) 0:52:00.510 ******** 2026-03-28 06:06:00.261146 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261152 | orchestrator | 2026-03-28 06:06:00.261158 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 06:06:00.261182 | orchestrator | Saturday 28 March 2026 06:05:55 +0000 (0:00:01.154) 0:52:01.665 ******** 2026-03-28 06:06:00.261188 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261194 | orchestrator | 2026-03-28 06:06:00.261200 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 06:06:00.261207 | orchestrator | Saturday 28 March 2026 06:05:56 +0000 (0:00:01.193) 0:52:02.859 ******** 2026-03-28 06:06:00.261213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261219 | orchestrator | 2026-03-28 06:06:00.261225 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 06:06:00.261231 | orchestrator | Saturday 28 March 2026 06:05:57 +0000 (0:00:01.152) 0:52:04.011 ******** 2026-03-28 06:06:00.261240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 06:06:00.261285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-03-28 06:06:00.261291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 
'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 06:06:00.261331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:06:00.261347 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:00.261353 | orchestrator | 2026-03-28 06:06:00.261359 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 06:06:00.261365 | orchestrator | Saturday 28 March 2026 06:05:58 +0000 (0:00:01.340) 0:52:05.352 ******** 2026-03-28 06:06:00.261372 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:00.261386 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274225 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '791014d9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1', 'scsi-SQEMU_QEMU_HARDDISK_791014d9-bcf5-4b2a-8a4f-8adbb33edda6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274366 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:06:08.274387 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:08.274408 | orchestrator | 2026-03-28 06:06:08.274427 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 06:06:08.274446 | 
orchestrator | Saturday 28 March 2026 06:06:00 +0000 (0:00:01.334) 0:52:06.687 ******** 2026-03-28 06:06:08.274464 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:06:08.274482 | orchestrator | 2026-03-28 06:06:08.274502 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 06:06:08.274533 | orchestrator | Saturday 28 March 2026 06:06:01 +0000 (0:00:01.538) 0:52:08.225 ******** 2026-03-28 06:06:08.274552 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:06:08.274572 | orchestrator | 2026-03-28 06:06:08.274590 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:06:08.274609 | orchestrator | Saturday 28 March 2026 06:06:02 +0000 (0:00:01.152) 0:52:09.377 ******** 2026-03-28 06:06:08.274626 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:06:08.274645 | orchestrator | 2026-03-28 06:06:08.274665 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:06:08.274684 | orchestrator | Saturday 28 March 2026 06:06:04 +0000 (0:00:01.591) 0:52:10.969 ******** 2026-03-28 06:06:08.274703 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:08.274721 | orchestrator | 2026-03-28 06:06:08.274739 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:06:08.274757 | orchestrator | Saturday 28 March 2026 06:06:05 +0000 (0:00:01.176) 0:52:12.145 ******** 2026-03-28 06:06:08.274774 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:08.274793 | orchestrator | 2026-03-28 06:06:08.274811 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:06:08.274829 | orchestrator | Saturday 28 March 2026 06:06:07 +0000 (0:00:01.366) 0:52:13.512 ******** 2026-03-28 06:06:08.274847 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:06:08.274866 | orchestrator | 2026-03-28 06:06:08.274883 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 06:06:08.274919 | orchestrator | Saturday 28 March 2026 06:06:08 +0000 (0:00:01.186) 0:52:14.698 ******** 2026-03-28 06:07:01.461428 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 06:07:01.461500 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 06:07:01.461506 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 06:07:01.461511 | orchestrator | 2026-03-28 06:07:01.461516 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 06:07:01.461522 | orchestrator | Saturday 28 March 2026 06:06:10 +0000 (0:00:01.754) 0:52:16.452 ******** 2026-03-28 06:07:01.461527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 06:07:01.461532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 06:07:01.461536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 06:07:01.461541 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:07:01.461545 | orchestrator | 2026-03-28 06:07:01.461549 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 06:07:01.461554 | orchestrator | Saturday 28 March 2026 06:06:11 +0000 (0:00:01.235) 0:52:17.688 ******** 2026-03-28 06:07:01.461558 | orchestrator | skipping: [testbed-node-0] 2026-03-28 06:07:01.461562 | orchestrator | 2026-03-28 06:07:01.461566 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 06:07:01.461570 | orchestrator | Saturday 28 March 2026 06:06:12 +0000 (0:00:01.162) 0:52:18.850 ******** 2026-03-28 06:07:01.461575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 06:07:01.461579 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 
06:07:01.461584 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:07:01.461588 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:07:01.461592 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:07:01.461596 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:07:01.461600 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:07:01.461605 | orchestrator | 2026-03-28 06:07:01.461609 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 06:07:01.461627 | orchestrator | Saturday 28 March 2026 06:06:14 +0000 (0:00:02.270) 0:52:21.121 ******** 2026-03-28 06:07:01.461631 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 06:07:01.461635 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:07:01.461639 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:07:01.461644 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:07:01.461648 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:07:01.461652 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:07:01.461656 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:07:01.461660 | orchestrator | 2026-03-28 06:07:01.461673 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-28 06:07:01.461677 | orchestrator | Saturday 28 March 2026 06:06:17 +0000 (0:00:02.711) 0:52:23.833 
******** 2026-03-28 06:07:01.461682 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:07:01.461686 | orchestrator | 2026-03-28 06:07:01.461690 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-28 06:07:01.461694 | orchestrator | Saturday 28 March 2026 06:06:20 +0000 (0:00:03.238) 0:52:27.072 ******** 2026-03-28 06:07:01.461699 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:07:01.461703 | orchestrator | 2026-03-28 06:07:01.461707 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-28 06:07:01.461711 | orchestrator | Saturday 28 March 2026 06:06:23 +0000 (0:00:02.940) 0:52:30.012 ******** 2026-03-28 06:07:01.461715 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:07:01.461719 | orchestrator | 2026-03-28 06:07:01.461723 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-28 06:07:01.461728 | orchestrator | Saturday 28 March 2026 06:06:25 +0000 (0:00:02.255) 0:52:32.268 ******** 2026-03-28 06:07:01.461734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4764', 'value': {'gid': 4764, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 3, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/1786992503', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 1786992503}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 1786992503}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-28 
06:07:01.461740 | orchestrator | 2026-03-28 06:07:01.461744 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-28 06:07:01.461748 | orchestrator | Saturday 28 March 2026 06:06:27 +0000 (0:00:01.268) 0:52:33.536 ******** 2026-03-28 06:07:01.461762 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-28 06:07:01.461766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-28 06:07:01.461770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-03-28 06:07:01.461775 | orchestrator | 2026-03-28 06:07:01.461779 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-28 06:07:01.461783 | orchestrator | Saturday 28 March 2026 06:06:28 +0000 (0:00:01.627) 0:52:35.163 ******** 2026-03-28 06:07:01.461787 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-28 06:07:01.461791 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-03-28 06:07:01.461796 | orchestrator | 2026-03-28 06:07:01.461800 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-28 06:07:01.461819 | orchestrator | Saturday 28 March 2026 06:06:30 +0000 (0:00:01.523) 0:52:36.686 ******** 2026-03-28 06:07:01.461823 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:07:01.461827 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:07:01.461831 | orchestrator | 2026-03-28 06:07:01.461836 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-28 06:07:01.461840 | orchestrator | Saturday 28 March 2026 06:06:39 +0000 (0:00:09.463) 0:52:46.150 ******** 2026-03-28 06:07:01.461844 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 
2026-03-28 06:07:01.461848 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:07:01.461852 | orchestrator | 2026-03-28 06:07:01.461856 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-28 06:07:01.461860 | orchestrator | Saturday 28 March 2026 06:06:43 +0000 (0:00:03.844) 0:52:49.995 ******** 2026-03-28 06:07:01.461865 | orchestrator | ok: [testbed-node-0] 2026-03-28 06:07:01.461869 | orchestrator | 2026-03-28 06:07:01.461873 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-28 06:07:01.461877 | orchestrator | Saturday 28 March 2026 06:06:45 +0000 (0:00:02.168) 0:52:52.163 ******** 2026-03-28 06:07:01.461882 | orchestrator | changed: [testbed-node-0] 2026-03-28 06:07:01.461886 | orchestrator | 2026-03-28 06:07:01.461890 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-28 06:07:01.461894 | orchestrator | 2026-03-28 06:07:01.461898 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 06:07:01.461902 | orchestrator | Saturday 28 March 2026 06:06:47 +0000 (0:00:01.595) 0:52:53.759 ******** 2026-03-28 06:07:01.461907 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-28 06:07:01.461911 | orchestrator | 2026-03-28 06:07:01.461958 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 06:07:01.461964 | orchestrator | Saturday 28 March 2026 06:06:48 +0000 (0:00:01.135) 0:52:54.894 ******** 2026-03-28 06:07:01.461968 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.461972 | orchestrator | 2026-03-28 06:07:01.461977 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 06:07:01.461981 | orchestrator | Saturday 28 March 2026 
06:06:49 +0000 (0:00:01.416) 0:52:56.311 ******** 2026-03-28 06:07:01.461985 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.461989 | orchestrator | 2026-03-28 06:07:01.461997 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:07:01.462001 | orchestrator | Saturday 28 March 2026 06:06:51 +0000 (0:00:01.178) 0:52:57.489 ******** 2026-03-28 06:07:01.462005 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462009 | orchestrator | 2026-03-28 06:07:01.462041 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:07:01.462047 | orchestrator | Saturday 28 March 2026 06:06:52 +0000 (0:00:01.519) 0:52:59.009 ******** 2026-03-28 06:07:01.462052 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462057 | orchestrator | 2026-03-28 06:07:01.462062 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 06:07:01.462067 | orchestrator | Saturday 28 March 2026 06:06:53 +0000 (0:00:01.209) 0:53:00.218 ******** 2026-03-28 06:07:01.462072 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462077 | orchestrator | 2026-03-28 06:07:01.462082 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 06:07:01.462088 | orchestrator | Saturday 28 March 2026 06:06:54 +0000 (0:00:01.129) 0:53:01.347 ******** 2026-03-28 06:07:01.462093 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462098 | orchestrator | 2026-03-28 06:07:01.462103 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 06:07:01.462108 | orchestrator | Saturday 28 March 2026 06:06:56 +0000 (0:00:01.163) 0:53:02.511 ******** 2026-03-28 06:07:01.462117 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:01.462122 | orchestrator | 2026-03-28 06:07:01.462127 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-03-28 06:07:01.462132 | orchestrator | Saturday 28 March 2026 06:06:57 +0000 (0:00:01.176) 0:53:03.687 ******** 2026-03-28 06:07:01.462137 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462142 | orchestrator | 2026-03-28 06:07:01.462147 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 06:07:01.462152 | orchestrator | Saturday 28 March 2026 06:06:58 +0000 (0:00:01.138) 0:53:04.826 ******** 2026-03-28 06:07:01.462157 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:07:01.462162 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:07:01.462167 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:07:01.462172 | orchestrator | 2026-03-28 06:07:01.462177 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 06:07:01.462182 | orchestrator | Saturday 28 March 2026 06:07:00 +0000 (0:00:01.752) 0:53:06.578 ******** 2026-03-28 06:07:01.462187 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:01.462192 | orchestrator | 2026-03-28 06:07:01.462202 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 06:07:27.357716 | orchestrator | Saturday 28 March 2026 06:07:01 +0000 (0:00:01.301) 0:53:07.879 ******** 2026-03-28 06:07:27.357806 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:07:27.357813 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:07:27.357818 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:07:27.357822 | orchestrator | 2026-03-28 06:07:27.357827 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 06:07:27.357831 | orchestrator | Saturday 28 March 2026 06:07:04 +0000 (0:00:02.859) 0:53:10.739 ******** 2026-03-28 06:07:27.357836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 06:07:27.357841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 06:07:27.357845 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 06:07:27.357849 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.357853 | orchestrator | 2026-03-28 06:07:27.357856 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 06:07:27.357860 | orchestrator | Saturday 28 March 2026 06:07:05 +0000 (0:00:01.490) 0:53:12.229 ******** 2026-03-28 06:07:27.357866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.357872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.357876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.357880 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.357884 | orchestrator | 2026-03-28 06:07:27.357888 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 06:07:27.357895 | orchestrator | Saturday 28 March 2026 06:07:07 +0000 
(0:00:02.042) 0:53:14.271 ******** 2026-03-28 06:07:27.357971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.357998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.358002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:27.358006 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358010 | orchestrator | 2026-03-28 06:07:27.358049 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 06:07:27.358053 | orchestrator | Saturday 28 March 2026 06:07:09 +0000 (0:00:01.240) 0:53:15.511 ******** 2026-03-28 06:07:27.358060 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 
06:07:01.984233', 'end': '2026-03-28 06:07:02.029528', 'delta': '0:00:00.045295', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 06:07:27.358078 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:07:02.526937', 'end': '2026-03-28 06:07:02.583632', 'delta': '0:00:00.056695', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 06:07:27.358083 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:07:03.113276', 'end': '2026-03-28 06:07:03.169345', 'delta': '0:00:00.056069', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 06:07:27.358087 | orchestrator | 2026-03-28 06:07:27.358091 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 06:07:27.358095 | orchestrator | Saturday 28 March 2026 06:07:10 +0000 (0:00:01.264) 0:53:16.776 ******** 2026-03-28 06:07:27.358098 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:27.358107 | orchestrator | 2026-03-28 06:07:27.358111 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 06:07:27.358114 | orchestrator | Saturday 28 March 2026 06:07:11 +0000 (0:00:01.340) 0:53:18.117 ******** 2026-03-28 06:07:27.358118 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358122 | orchestrator | 2026-03-28 06:07:27.358126 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 06:07:27.358130 | orchestrator | Saturday 28 March 2026 06:07:13 +0000 (0:00:01.727) 0:53:19.844 ******** 2026-03-28 06:07:27.358134 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:27.358137 | orchestrator | 2026-03-28 06:07:27.358141 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 06:07:27.358149 | orchestrator | Saturday 28 March 2026 06:07:14 +0000 (0:00:01.135) 0:53:20.980 ******** 2026-03-28 06:07:27.358153 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:07:27.358157 | orchestrator | 2026-03-28 06:07:27.358160 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:07:27.358164 | orchestrator | Saturday 28 March 2026 06:07:16 +0000 (0:00:02.038) 0:53:23.018 ******** 2026-03-28 06:07:27.358168 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:27.358172 | orchestrator | 2026-03-28 
06:07:27.358176 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 06:07:27.358179 | orchestrator | Saturday 28 March 2026 06:07:17 +0000 (0:00:01.200) 0:53:24.218 ******** 2026-03-28 06:07:27.358183 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358187 | orchestrator | 2026-03-28 06:07:27.358191 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 06:07:27.358195 | orchestrator | Saturday 28 March 2026 06:07:18 +0000 (0:00:01.213) 0:53:25.432 ******** 2026-03-28 06:07:27.358198 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358202 | orchestrator | 2026-03-28 06:07:27.358206 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:07:27.358210 | orchestrator | Saturday 28 March 2026 06:07:20 +0000 (0:00:01.256) 0:53:26.688 ******** 2026-03-28 06:07:27.358213 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358217 | orchestrator | 2026-03-28 06:07:27.358221 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 06:07:27.358225 | orchestrator | Saturday 28 March 2026 06:07:21 +0000 (0:00:01.150) 0:53:27.839 ******** 2026-03-28 06:07:27.358229 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358232 | orchestrator | 2026-03-28 06:07:27.358236 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 06:07:27.358240 | orchestrator | Saturday 28 March 2026 06:07:22 +0000 (0:00:01.164) 0:53:29.004 ******** 2026-03-28 06:07:27.358244 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:27.358247 | orchestrator | 2026-03-28 06:07:27.358251 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 06:07:27.358255 | orchestrator | Saturday 28 March 2026 06:07:23 +0000 (0:00:01.180) 
0:53:30.184 ******** 2026-03-28 06:07:27.358259 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358262 | orchestrator | 2026-03-28 06:07:27.358266 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 06:07:27.358270 | orchestrator | Saturday 28 March 2026 06:07:24 +0000 (0:00:01.185) 0:53:31.370 ******** 2026-03-28 06:07:27.358274 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:27.358277 | orchestrator | 2026-03-28 06:07:27.358281 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 06:07:27.358285 | orchestrator | Saturday 28 March 2026 06:07:26 +0000 (0:00:01.263) 0:53:32.634 ******** 2026-03-28 06:07:27.358289 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:27.358294 | orchestrator | 2026-03-28 06:07:28.740651 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 06:07:28.740761 | orchestrator | Saturday 28 March 2026 06:07:27 +0000 (0:00:01.145) 0:53:33.779 ******** 2026-03-28 06:07:28.740816 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:28.740838 | orchestrator | 2026-03-28 06:07:28.740857 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 06:07:28.740873 | orchestrator | Saturday 28 March 2026 06:07:28 +0000 (0:00:01.165) 0:53:34.945 ******** 2026-03-28 06:07:28.740895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}})  2026-03-28 06:07:28.741032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 06:07:28.741055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}})  2026-03-28 06:07:28.741131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 06:07:28.741210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741256 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}})  2026-03-28 06:07:28.741271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}})  2026-03-28 06:07:28.741285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:28.741314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 06:07:30.258274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:30.258388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:07:30.258402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:07:30.258413 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:30.258423 | orchestrator | 2026-03-28 06:07:30.258431 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 06:07:30.258440 | orchestrator | Saturday 28 March 2026 06:07:30 +0000 (0:00:01.523) 0:53:36.468 ******** 2026-03-28 06:07:30.258448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:30.258570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:07:35.643606 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:07:35.643619 | orchestrator | 2026-03-28 06:07:35.643632 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 06:07:35.643644 | orchestrator | Saturday 28 March 2026 06:07:31 +0000 (0:00:01.380) 0:53:37.849 ******** 2026-03-28 06:07:35.643655 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:35.643667 | orchestrator | 2026-03-28 06:07:35.643678 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 06:07:35.643688 | orchestrator | Saturday 28 March 2026 06:07:32 +0000 (0:00:01.527) 0:53:39.377 ******** 2026-03-28 06:07:35.643699 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:35.643710 | orchestrator | 2026-03-28 06:07:35.643721 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:07:35.643732 | orchestrator | Saturday 28 March 2026 06:07:34 +0000 (0:00:01.197) 0:53:40.574 ******** 2026-03-28 06:07:35.643745 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:07:35.643757 | orchestrator | 2026-03-28 06:07:35.643770 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:07:35.643791 | orchestrator | Saturday 28 March 2026 06:07:35 +0000 (0:00:01.496) 0:53:42.071 ******** 2026-03-28 06:08:17.502253 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502376 | orchestrator | 2026-03-28 06:08:17.502394 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:08:17.502408 | orchestrator | Saturday 28 March 2026 06:07:36 +0000 (0:00:01.123) 0:53:43.195 ******** 2026-03-28 06:08:17.502419 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
06:08:17.502431 | orchestrator | 2026-03-28 06:08:17.502442 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:08:17.502470 | orchestrator | Saturday 28 March 2026 06:07:38 +0000 (0:00:01.319) 0:53:44.514 ******** 2026-03-28 06:08:17.502483 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502494 | orchestrator | 2026-03-28 06:08:17.502505 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 06:08:17.502540 | orchestrator | Saturday 28 March 2026 06:07:39 +0000 (0:00:01.174) 0:53:45.689 ******** 2026-03-28 06:08:17.502553 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-28 06:08:17.502564 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 06:08:17.502575 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 06:08:17.502586 | orchestrator | 2026-03-28 06:08:17.502597 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 06:08:17.502608 | orchestrator | Saturday 28 March 2026 06:07:41 +0000 (0:00:02.037) 0:53:47.727 ******** 2026-03-28 06:08:17.502619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 06:08:17.502631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 06:08:17.502642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 06:08:17.502653 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502663 | orchestrator | 2026-03-28 06:08:17.502675 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 06:08:17.502686 | orchestrator | Saturday 28 March 2026 06:07:42 +0000 (0:00:01.189) 0:53:48.916 ******** 2026-03-28 06:08:17.502697 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-28 06:08:17.502708 | 
orchestrator | 2026-03-28 06:08:17.502720 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:08:17.502733 | orchestrator | Saturday 28 March 2026 06:07:43 +0000 (0:00:01.150) 0:53:50.066 ******** 2026-03-28 06:08:17.502744 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502758 | orchestrator | 2026-03-28 06:08:17.502772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:08:17.502785 | orchestrator | Saturday 28 March 2026 06:07:44 +0000 (0:00:01.143) 0:53:51.210 ******** 2026-03-28 06:08:17.502798 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502811 | orchestrator | 2026-03-28 06:08:17.502824 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:08:17.502837 | orchestrator | Saturday 28 March 2026 06:07:46 +0000 (0:00:01.232) 0:53:52.442 ******** 2026-03-28 06:08:17.502850 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.502863 | orchestrator | 2026-03-28 06:08:17.502877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:08:17.502926 | orchestrator | Saturday 28 March 2026 06:07:47 +0000 (0:00:01.218) 0:53:53.660 ******** 2026-03-28 06:08:17.502947 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.502968 | orchestrator | 2026-03-28 06:08:17.502987 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:08:17.503009 | orchestrator | Saturday 28 March 2026 06:07:48 +0000 (0:00:01.289) 0:53:54.950 ******** 2026-03-28 06:08:17.503028 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:08:17.503045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:08:17.503057 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-28 06:08:17.503071 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.503084 | orchestrator | 2026-03-28 06:08:17.503097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:08:17.503109 | orchestrator | Saturday 28 March 2026 06:07:49 +0000 (0:00:01.473) 0:53:56.423 ******** 2026-03-28 06:08:17.503119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:08:17.503130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:08:17.503141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:08:17.503152 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.503163 | orchestrator | 2026-03-28 06:08:17.503173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:08:17.503184 | orchestrator | Saturday 28 March 2026 06:07:51 +0000 (0:00:01.434) 0:53:57.858 ******** 2026-03-28 06:08:17.503205 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:08:17.503216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:08:17.503227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:08:17.503238 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.503249 | orchestrator | 2026-03-28 06:08:17.503260 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:08:17.503271 | orchestrator | Saturday 28 March 2026 06:07:52 +0000 (0:00:01.437) 0:53:59.296 ******** 2026-03-28 06:08:17.503282 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.503293 | orchestrator | 2026-03-28 06:08:17.503303 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:08:17.503314 | orchestrator | Saturday 28 March 2026 06:07:54 +0000 
(0:00:01.143) 0:54:00.439 ******** 2026-03-28 06:08:17.503325 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 06:08:17.503336 | orchestrator | 2026-03-28 06:08:17.503347 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 06:08:17.503357 | orchestrator | Saturday 28 March 2026 06:07:55 +0000 (0:00:01.407) 0:54:01.847 ******** 2026-03-28 06:08:17.503387 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:08:17.503399 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:08:17.503410 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:08:17.503421 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:08:17.503432 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:08:17.503450 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-28 06:08:17.503461 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:08:17.503472 | orchestrator | 2026-03-28 06:08:17.503483 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 06:08:17.503494 | orchestrator | Saturday 28 March 2026 06:07:57 +0000 (0:00:02.291) 0:54:04.139 ******** 2026-03-28 06:08:17.503505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:08:17.503515 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:08:17.503526 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:08:17.503537 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-28 06:08:17.503547 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 06:08:17.503558 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-28 06:08:17.503569 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:08:17.503580 | orchestrator | 2026-03-28 06:08:17.503590 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-03-28 06:08:17.503601 | orchestrator | Saturday 28 March 2026 06:08:00 +0000 (0:00:02.751) 0:54:06.890 ******** 2026-03-28 06:08:17.503612 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.503623 | orchestrator | 2026-03-28 06:08:17.503634 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 06:08:17.503644 | orchestrator | Saturday 28 March 2026 06:08:01 +0000 (0:00:01.169) 0:54:08.059 ******** 2026-03-28 06:08:17.503655 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-28 06:08:17.503666 | orchestrator | 2026-03-28 06:08:17.503677 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 06:08:17.503688 | orchestrator | Saturday 28 March 2026 06:08:02 +0000 (0:00:01.161) 0:54:09.220 ******** 2026-03-28 06:08:17.503706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-28 06:08:17.503717 | orchestrator | 2026-03-28 06:08:17.503728 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 06:08:17.503738 | orchestrator | Saturday 28 March 2026 06:08:04 +0000 (0:00:01.326) 0:54:10.547 ******** 2026-03-28 06:08:17.503749 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.503760 | orchestrator | 2026-03-28 06:08:17.503771 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 06:08:17.503781 | orchestrator | Saturday 28 March 2026 06:08:05 +0000 (0:00:01.153) 0:54:11.700 ******** 2026-03-28 06:08:17.503792 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.503803 | orchestrator | 2026-03-28 06:08:17.503814 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 06:08:17.503825 | orchestrator | Saturday 28 March 2026 06:08:06 +0000 (0:00:01.482) 0:54:13.183 ******** 2026-03-28 06:08:17.503835 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.503846 | orchestrator | 2026-03-28 06:08:17.503857 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 06:08:17.503868 | orchestrator | Saturday 28 March 2026 06:08:08 +0000 (0:00:01.510) 0:54:14.693 ******** 2026-03-28 06:08:17.503882 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.503925 | orchestrator | 2026-03-28 06:08:17.503946 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 06:08:17.503965 | orchestrator | Saturday 28 March 2026 06:08:09 +0000 (0:00:01.507) 0:54:16.200 ******** 2026-03-28 06:08:17.503984 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.504004 | orchestrator | 2026-03-28 06:08:17.504022 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 06:08:17.504038 | orchestrator | Saturday 28 March 2026 06:08:10 +0000 (0:00:01.147) 0:54:17.348 ******** 2026-03-28 06:08:17.504049 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.504065 | orchestrator | 2026-03-28 06:08:17.504083 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 06:08:17.504101 | orchestrator | Saturday 28 March 2026 06:08:12 +0000 (0:00:01.215) 0:54:18.564 ******** 2026-03-28 06:08:17.504120 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.504138 | orchestrator | 2026-03-28 06:08:17.504193 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 06:08:17.504210 | orchestrator | Saturday 28 March 2026 06:08:13 +0000 (0:00:01.157) 0:54:19.721 ******** 2026-03-28 06:08:17.504221 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.504232 | orchestrator | 2026-03-28 06:08:17.504243 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 06:08:17.504253 | orchestrator | Saturday 28 March 2026 06:08:14 +0000 (0:00:01.507) 0:54:21.228 ******** 2026-03-28 06:08:17.504264 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:08:17.504275 | orchestrator | 2026-03-28 06:08:17.504285 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 06:08:17.504296 | orchestrator | Saturday 28 March 2026 06:08:16 +0000 (0:00:01.532) 0:54:22.761 ******** 2026-03-28 06:08:17.504307 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:08:17.504318 | orchestrator | 2026-03-28 06:08:17.504329 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 06:08:17.504350 | orchestrator | Saturday 28 March 2026 06:08:17 +0000 (0:00:01.161) 0:54:23.922 ******** 2026-03-28 06:09:06.448669 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448748 | orchestrator | 2026-03-28 06:09:06.448755 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 06:09:06.448761 | orchestrator | Saturday 28 March 2026 06:08:18 +0000 (0:00:01.122) 0:54:25.045 ******** 2026-03-28 06:09:06.448765 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.448770 | orchestrator | 2026-03-28 06:09:06.448774 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 
06:09:06.448788 | orchestrator | Saturday 28 March 2026 06:08:19 +0000 (0:00:01.212) 0:54:26.258 ******** 2026-03-28 06:09:06.448806 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.448810 | orchestrator | 2026-03-28 06:09:06.448814 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 06:09:06.448817 | orchestrator | Saturday 28 March 2026 06:08:21 +0000 (0:00:01.189) 0:54:27.447 ******** 2026-03-28 06:09:06.448821 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.448825 | orchestrator | 2026-03-28 06:09:06.448829 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 06:09:06.448833 | orchestrator | Saturday 28 March 2026 06:08:22 +0000 (0:00:01.239) 0:54:28.686 ******** 2026-03-28 06:09:06.448837 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448841 | orchestrator | 2026-03-28 06:09:06.448844 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 06:09:06.448848 | orchestrator | Saturday 28 March 2026 06:08:23 +0000 (0:00:01.123) 0:54:29.810 ******** 2026-03-28 06:09:06.448852 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448856 | orchestrator | 2026-03-28 06:09:06.448860 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 06:09:06.448864 | orchestrator | Saturday 28 March 2026 06:08:24 +0000 (0:00:01.134) 0:54:30.945 ******** 2026-03-28 06:09:06.448867 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448871 | orchestrator | 2026-03-28 06:09:06.448875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 06:09:06.448879 | orchestrator | Saturday 28 March 2026 06:08:25 +0000 (0:00:01.173) 0:54:32.119 ******** 2026-03-28 06:09:06.448920 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.448924 | orchestrator | 2026-03-28 
06:09:06.448928 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 06:09:06.448932 | orchestrator | Saturday 28 March 2026 06:08:26 +0000 (0:00:01.190) 0:54:33.310 ******** 2026-03-28 06:09:06.448935 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.448939 | orchestrator | 2026-03-28 06:09:06.448943 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 06:09:06.448947 | orchestrator | Saturday 28 March 2026 06:08:28 +0000 (0:00:01.142) 0:54:34.453 ******** 2026-03-28 06:09:06.448951 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448954 | orchestrator | 2026-03-28 06:09:06.448958 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 06:09:06.448962 | orchestrator | Saturday 28 March 2026 06:08:29 +0000 (0:00:01.113) 0:54:35.566 ******** 2026-03-28 06:09:06.448966 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448969 | orchestrator | 2026-03-28 06:09:06.448973 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 06:09:06.448977 | orchestrator | Saturday 28 March 2026 06:08:30 +0000 (0:00:01.173) 0:54:36.740 ******** 2026-03-28 06:09:06.448981 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.448984 | orchestrator | 2026-03-28 06:09:06.448989 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 06:09:06.448995 | orchestrator | Saturday 28 March 2026 06:08:31 +0000 (0:00:01.207) 0:54:37.947 ******** 2026-03-28 06:09:06.449001 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449007 | orchestrator | 2026-03-28 06:09:06.449014 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 06:09:06.449020 | orchestrator | Saturday 28 March 2026 06:08:32 +0000 (0:00:01.124) 
0:54:39.072 ******** 2026-03-28 06:09:06.449026 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449032 | orchestrator | 2026-03-28 06:09:06.449038 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 06:09:06.449045 | orchestrator | Saturday 28 March 2026 06:08:33 +0000 (0:00:01.169) 0:54:40.241 ******** 2026-03-28 06:09:06.449051 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449057 | orchestrator | 2026-03-28 06:09:06.449063 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 06:09:06.449070 | orchestrator | Saturday 28 March 2026 06:08:34 +0000 (0:00:01.150) 0:54:41.392 ******** 2026-03-28 06:09:06.449079 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449083 | orchestrator | 2026-03-28 06:09:06.449087 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 06:09:06.449091 | orchestrator | Saturday 28 March 2026 06:08:36 +0000 (0:00:01.165) 0:54:42.557 ******** 2026-03-28 06:09:06.449095 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449099 | orchestrator | 2026-03-28 06:09:06.449103 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 06:09:06.449106 | orchestrator | Saturday 28 March 2026 06:08:37 +0000 (0:00:01.145) 0:54:43.703 ******** 2026-03-28 06:09:06.449110 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449114 | orchestrator | 2026-03-28 06:09:06.449118 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 06:09:06.449122 | orchestrator | Saturday 28 March 2026 06:08:38 +0000 (0:00:01.163) 0:54:44.867 ******** 2026-03-28 06:09:06.449125 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449129 | orchestrator | 2026-03-28 06:09:06.449133 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-03-28 06:09:06.449137 | orchestrator | Saturday 28 March 2026 06:08:39 +0000 (0:00:01.135) 0:54:46.003 ******** 2026-03-28 06:09:06.449140 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449144 | orchestrator | 2026-03-28 06:09:06.449148 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-28 06:09:06.449152 | orchestrator | Saturday 28 March 2026 06:08:40 +0000 (0:00:01.151) 0:54:47.154 ******** 2026-03-28 06:09:06.449156 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449159 | orchestrator | 2026-03-28 06:09:06.449174 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 06:09:06.449178 | orchestrator | Saturday 28 March 2026 06:08:41 +0000 (0:00:01.132) 0:54:48.287 ******** 2026-03-28 06:09:06.449182 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.449186 | orchestrator | 2026-03-28 06:09:06.449190 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 06:09:06.449194 | orchestrator | Saturday 28 March 2026 06:08:43 +0000 (0:00:01.987) 0:54:50.275 ******** 2026-03-28 06:09:06.449201 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.449205 | orchestrator | 2026-03-28 06:09:06.449209 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 06:09:06.449212 | orchestrator | Saturday 28 March 2026 06:08:46 +0000 (0:00:02.185) 0:54:52.461 ******** 2026-03-28 06:09:06.449216 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-28 06:09:06.449221 | orchestrator | 2026-03-28 06:09:06.449225 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 06:09:06.449229 | orchestrator | Saturday 28 March 2026 06:08:47 +0000 (0:00:01.130) 
0:54:53.591 ******** 2026-03-28 06:09:06.449233 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449237 | orchestrator | 2026-03-28 06:09:06.449242 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 06:09:06.449246 | orchestrator | Saturday 28 March 2026 06:08:48 +0000 (0:00:01.143) 0:54:54.735 ******** 2026-03-28 06:09:06.449250 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449255 | orchestrator | 2026-03-28 06:09:06.449260 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 06:09:06.449264 | orchestrator | Saturday 28 March 2026 06:08:49 +0000 (0:00:01.286) 0:54:56.021 ******** 2026-03-28 06:09:06.449269 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 06:09:06.449273 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 06:09:06.449278 | orchestrator | 2026-03-28 06:09:06.449282 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 06:09:06.449287 | orchestrator | Saturday 28 March 2026 06:08:51 +0000 (0:00:01.780) 0:54:57.801 ******** 2026-03-28 06:09:06.449295 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.449299 | orchestrator | 2026-03-28 06:09:06.449304 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 06:09:06.449308 | orchestrator | Saturday 28 March 2026 06:08:52 +0000 (0:00:01.491) 0:54:59.292 ******** 2026-03-28 06:09:06.449312 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449317 | orchestrator | 2026-03-28 06:09:06.449321 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 06:09:06.449326 | orchestrator | Saturday 28 March 2026 06:08:54 +0000 (0:00:01.148) 0:55:00.441 ******** 2026-03-28 06:09:06.449330 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449334 | orchestrator | 2026-03-28 06:09:06.449339 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 06:09:06.449343 | orchestrator | Saturday 28 March 2026 06:08:55 +0000 (0:00:01.204) 0:55:01.646 ******** 2026-03-28 06:09:06.449348 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449352 | orchestrator | 2026-03-28 06:09:06.449357 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 06:09:06.449361 | orchestrator | Saturday 28 March 2026 06:08:56 +0000 (0:00:01.117) 0:55:02.764 ******** 2026-03-28 06:09:06.449365 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-03-28 06:09:06.449370 | orchestrator | 2026-03-28 06:09:06.449375 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 06:09:06.449379 | orchestrator | Saturday 28 March 2026 06:08:57 +0000 (0:00:01.126) 0:55:03.890 ******** 2026-03-28 06:09:06.449383 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:06.449388 | orchestrator | 2026-03-28 06:09:06.449392 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 06:09:06.449397 | orchestrator | Saturday 28 March 2026 06:08:59 +0000 (0:00:01.768) 0:55:05.658 ******** 2026-03-28 06:09:06.449401 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 06:09:06.449406 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 06:09:06.449410 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 06:09:06.449415 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449419 | orchestrator | 2026-03-28 06:09:06.449424 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-03-28 06:09:06.449428 | orchestrator | Saturday 28 March 2026 06:09:00 +0000 (0:00:01.265) 0:55:06.924 ******** 2026-03-28 06:09:06.449433 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449437 | orchestrator | 2026-03-28 06:09:06.449442 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-28 06:09:06.449446 | orchestrator | Saturday 28 March 2026 06:09:01 +0000 (0:00:01.171) 0:55:08.096 ******** 2026-03-28 06:09:06.449451 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449455 | orchestrator | 2026-03-28 06:09:06.449460 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 06:09:06.449463 | orchestrator | Saturday 28 March 2026 06:09:02 +0000 (0:00:01.174) 0:55:09.270 ******** 2026-03-28 06:09:06.449467 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449471 | orchestrator | 2026-03-28 06:09:06.449475 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 06:09:06.449478 | orchestrator | Saturday 28 March 2026 06:09:03 +0000 (0:00:01.154) 0:55:10.424 ******** 2026-03-28 06:09:06.449482 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449486 | orchestrator | 2026-03-28 06:09:06.449490 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 06:09:06.449493 | orchestrator | Saturday 28 March 2026 06:09:05 +0000 (0:00:01.304) 0:55:11.729 ******** 2026-03-28 06:09:06.449497 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:06.449501 | orchestrator | 2026-03-28 06:09:06.449507 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 06:09:57.592148 | orchestrator | Saturday 28 March 2026 06:09:06 +0000 (0:00:01.140) 0:55:12.869 ******** 2026-03-28 06:09:57.592249 | orchestrator | 
ok: [testbed-node-5] 2026-03-28 06:09:57.592261 | orchestrator | 2026-03-28 06:09:57.592270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 06:09:57.592278 | orchestrator | Saturday 28 March 2026 06:09:08 +0000 (0:00:02.489) 0:55:15.359 ******** 2026-03-28 06:09:57.592300 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:09:57.592308 | orchestrator | 2026-03-28 06:09:57.592316 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 06:09:57.592323 | orchestrator | Saturday 28 March 2026 06:09:10 +0000 (0:00:01.142) 0:55:16.501 ******** 2026-03-28 06:09:57.592331 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-03-28 06:09:57.592339 | orchestrator | 2026-03-28 06:09:57.592347 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 06:09:57.592354 | orchestrator | Saturday 28 March 2026 06:09:11 +0000 (0:00:01.138) 0:55:17.640 ******** 2026-03-28 06:09:57.592362 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592370 | orchestrator | 2026-03-28 06:09:57.592378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 06:09:57.592385 | orchestrator | Saturday 28 March 2026 06:09:12 +0000 (0:00:01.171) 0:55:18.811 ******** 2026-03-28 06:09:57.592393 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592400 | orchestrator | 2026-03-28 06:09:57.592407 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 06:09:57.592414 | orchestrator | Saturday 28 March 2026 06:09:13 +0000 (0:00:01.165) 0:55:19.977 ******** 2026-03-28 06:09:57.592422 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592429 | orchestrator | 2026-03-28 06:09:57.592436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-03-28 06:09:57.592444 | orchestrator | Saturday 28 March 2026 06:09:14 +0000 (0:00:01.154) 0:55:21.132 ******** 2026-03-28 06:09:57.592451 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592458 | orchestrator | 2026-03-28 06:09:57.592466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-28 06:09:57.592473 | orchestrator | Saturday 28 March 2026 06:09:15 +0000 (0:00:01.172) 0:55:22.304 ******** 2026-03-28 06:09:57.592480 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592487 | orchestrator | 2026-03-28 06:09:57.592495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 06:09:57.592502 | orchestrator | Saturday 28 March 2026 06:09:17 +0000 (0:00:01.146) 0:55:23.451 ******** 2026-03-28 06:09:57.592509 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592517 | orchestrator | 2026-03-28 06:09:57.592524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 06:09:57.592531 | orchestrator | Saturday 28 March 2026 06:09:18 +0000 (0:00:01.197) 0:55:24.649 ******** 2026-03-28 06:09:57.592538 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592546 | orchestrator | 2026-03-28 06:09:57.592553 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 06:09:57.592560 | orchestrator | Saturday 28 March 2026 06:09:19 +0000 (0:00:01.183) 0:55:25.832 ******** 2026-03-28 06:09:57.592568 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.592575 | orchestrator | 2026-03-28 06:09:57.592582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 06:09:57.592590 | orchestrator | Saturday 28 March 2026 06:09:20 +0000 (0:00:01.326) 0:55:27.158 ******** 2026-03-28 06:09:57.592597 | orchestrator | ok: [testbed-node-5] 
2026-03-28 06:09:57.592604 | orchestrator | 2026-03-28 06:09:57.592612 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 06:09:57.592619 | orchestrator | Saturday 28 March 2026 06:09:21 +0000 (0:00:01.153) 0:55:28.311 ******** 2026-03-28 06:09:57.592626 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-28 06:09:57.592652 | orchestrator | 2026-03-28 06:09:57.592660 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 06:09:57.592667 | orchestrator | Saturday 28 March 2026 06:09:22 +0000 (0:00:01.104) 0:55:29.416 ******** 2026-03-28 06:09:57.592675 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-28 06:09:57.592682 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-28 06:09:57.592691 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-28 06:09:57.592700 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-28 06:09:57.592708 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-28 06:09:57.592717 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-28 06:09:57.592725 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-28 06:09:57.592734 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-28 06:09:57.592742 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 06:09:57.592751 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 06:09:57.592760 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 06:09:57.592769 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 06:09:57.592778 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 06:09:57.592786 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 06:09:57.592795 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-28 06:09:57.592804 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-28 06:09:57.592812 | orchestrator | 2026-03-28 06:09:57.592820 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 06:09:57.592829 | orchestrator | Saturday 28 March 2026 06:09:29 +0000 (0:00:06.580) 0:55:35.996 ******** 2026-03-28 06:09:57.592837 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-28 06:09:57.592846 | orchestrator | 2026-03-28 06:09:57.592900 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-28 06:09:57.592912 | orchestrator | Saturday 28 March 2026 06:09:30 +0000 (0:00:01.154) 0:55:37.150 ******** 2026-03-28 06:09:57.592931 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:09:57.592944 | orchestrator | 2026-03-28 06:09:57.592956 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-28 06:09:57.592969 | orchestrator | Saturday 28 March 2026 06:09:32 +0000 (0:00:01.544) 0:55:38.694 ******** 2026-03-28 06:09:57.592981 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:09:57.592994 | orchestrator | 2026-03-28 06:09:57.593002 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 06:09:57.593009 | orchestrator | Saturday 28 March 2026 06:09:34 +0000 (0:00:01.997) 0:55:40.692 ******** 2026-03-28 06:09:57.593016 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:09:57.593024 | orchestrator | 
2026-03-28 06:09:57.593031 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 06:09:57.593038 | orchestrator | Saturday 28 March 2026 06:09:35 +0000 (0:00:01.208) 0:55:41.901 ********
2026-03-28 06:09:57.593046 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593053 | orchestrator |
2026-03-28 06:09:57.593060 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 06:09:57.593067 | orchestrator | Saturday 28 March 2026 06:09:36 +0000 (0:00:01.208) 0:55:43.109 ********
2026-03-28 06:09:57.593075 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593082 | orchestrator |
2026-03-28 06:09:57.593090 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 06:09:57.593097 | orchestrator | Saturday 28 March 2026 06:09:37 +0000 (0:00:01.134) 0:55:44.244 ********
2026-03-28 06:09:57.593111 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593119 | orchestrator |
2026-03-28 06:09:57.593126 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 06:09:57.593134 | orchestrator | Saturday 28 March 2026 06:09:38 +0000 (0:00:01.161) 0:55:45.405 ********
2026-03-28 06:09:57.593143 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593152 | orchestrator |
2026-03-28 06:09:57.593161 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 06:09:57.593170 | orchestrator | Saturday 28 March 2026 06:09:40 +0000 (0:00:01.167) 0:55:46.573 ********
2026-03-28 06:09:57.593178 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593187 | orchestrator |
2026-03-28 06:09:57.593196 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 06:09:57.593205 | orchestrator | Saturday 28 March 2026 06:09:41 +0000 (0:00:01.143) 0:55:47.716 ********
2026-03-28 06:09:57.593214 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593222 | orchestrator |
2026-03-28 06:09:57.593231 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 06:09:57.593240 | orchestrator | Saturday 28 March 2026 06:09:42 +0000 (0:00:01.139) 0:55:48.855 ********
2026-03-28 06:09:57.593249 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593258 | orchestrator |
2026-03-28 06:09:57.593266 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 06:09:57.593275 | orchestrator | Saturday 28 March 2026 06:09:43 +0000 (0:00:01.162) 0:55:50.018 ********
2026-03-28 06:09:57.593284 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593293 | orchestrator |
2026-03-28 06:09:57.593301 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 06:09:57.593310 | orchestrator | Saturday 28 March 2026 06:09:44 +0000 (0:00:01.139) 0:55:51.158 ********
2026-03-28 06:09:57.593319 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593328 | orchestrator |
2026-03-28 06:09:57.593337 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 06:09:57.593345 | orchestrator | Saturday 28 March 2026 06:09:45 +0000 (0:00:01.183) 0:55:52.342 ********
2026-03-28 06:09:57.593354 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:09:57.593363 | orchestrator |
2026-03-28 06:09:57.593372 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 06:09:57.593380 | orchestrator | Saturday 28 March 2026 06:09:47 +0000 (0:00:01.172) 0:55:53.515 ********
2026-03-28 06:09:57.593389 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-28 06:09:57.593398 | orchestrator |
2026-03-28 06:09:57.593407 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 06:09:57.593415 | orchestrator | Saturday 28 March 2026 06:09:51 +0000 (0:00:04.424) 0:55:57.939 ********
2026-03-28 06:09:57.593424 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 06:09:57.593433 | orchestrator |
2026-03-28 06:09:57.593442 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 06:09:57.593450 | orchestrator | Saturday 28 March 2026 06:09:52 +0000 (0:00:01.215) 0:55:59.154 ********
2026-03-28 06:09:57.593461 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-28 06:09:57.593480 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-28 06:10:56.082551 | orchestrator |
2026-03-28 06:10:56.082670 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 06:10:56.082704 | orchestrator | Saturday 28 March 2026 06:09:57 +0000 (0:00:04.859) 0:56:04.014 ********
2026-03-28 06:10:56.082717 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.082730 | orchestrator |
2026-03-28 06:10:56.082742 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 06:10:56.082754 | orchestrator | Saturday 28 March 2026 06:09:58 +0000 (0:00:01.216) 0:56:05.230 ********
2026-03-28 06:10:56.082765 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.082777 | orchestrator |
2026-03-28 06:10:56.082790 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 06:10:56.082802 | orchestrator | Saturday 28 March 2026 06:09:59 +0000 (0:00:01.163) 0:56:06.394 ********
2026-03-28 06:10:56.082813 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.082824 | orchestrator |
2026-03-28 06:10:56.082835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 06:10:56.082846 | orchestrator | Saturday 28 March 2026 06:10:01 +0000 (0:00:01.278) 0:56:07.673 ********
2026-03-28 06:10:56.082857 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.082939 | orchestrator |
2026-03-28 06:10:56.082950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 06:10:56.082962 | orchestrator | Saturday 28 March 2026 06:10:02 +0000 (0:00:01.171) 0:56:08.845 ********
2026-03-28 06:10:56.082974 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.082985 | orchestrator |
2026-03-28 06:10:56.082997 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 06:10:56.083008 | orchestrator | Saturday 28 March 2026 06:10:03 +0000 (0:00:01.216) 0:56:10.061 ********
2026-03-28 06:10:56.083019 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.083032 | orchestrator |
2026-03-28 06:10:56.083043 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 06:10:56.083054 | orchestrator | Saturday 28 March 2026 06:10:04 +0000 (0:00:01.250) 0:56:11.312 ********
2026-03-28 06:10:56.083066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:10:56.083078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:10:56.083091 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:10:56.083104 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.083117 | orchestrator |
2026-03-28 06:10:56.083131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 06:10:56.083144 | orchestrator | Saturday 28 March 2026 06:10:06 +0000 (0:00:01.468) 0:56:12.780 ********
2026-03-28 06:10:56.083157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:10:56.083171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:10:56.083184 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:10:56.083197 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.083209 | orchestrator |
2026-03-28 06:10:56.083222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 06:10:56.083236 | orchestrator | Saturday 28 March 2026 06:10:07 +0000 (0:00:01.527) 0:56:14.307 ********
2026-03-28 06:10:56.083250 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:10:56.083263 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:10:56.083276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:10:56.083288 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.083301 | orchestrator |
2026-03-28 06:10:56.083314 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 06:10:56.083327 | orchestrator | Saturday 28 March 2026 06:10:09 +0000 (0:00:01.359) 0:56:15.666 ********
2026-03-28 06:10:56.083340 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.083374 | orchestrator |
2026-03-28 06:10:56.083388 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 06:10:56.083401 | orchestrator | Saturday 28 March 2026 06:10:10 +0000 (0:00:01.224) 0:56:16.891 ********
2026-03-28 06:10:56.083414 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 06:10:56.083426 | orchestrator |
2026-03-28 06:10:56.083440 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-28 06:10:56.083451 | orchestrator | Saturday 28 March 2026 06:10:11 +0000 (0:00:01.337) 0:56:18.229 ********
2026-03-28 06:10:56.083463 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.083474 | orchestrator |
2026-03-28 06:10:56.083485 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-28 06:10:56.083496 | orchestrator | Saturday 28 March 2026 06:10:13 +0000 (0:00:01.726) 0:56:19.955 ********
2026-03-28 06:10:56.083508 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.083519 | orchestrator |
2026-03-28 06:10:56.083530 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-28 06:10:56.083541 | orchestrator | Saturday 28 March 2026 06:10:14 +0000 (0:00:01.129) 0:56:21.085 ********
2026-03-28 06:10:56.083552 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5
2026-03-28 06:10:56.083564 | orchestrator |
2026-03-28 06:10:56.083575 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-28 06:10:56.083586 | orchestrator | Saturday 28 March 2026 06:10:16 +0000 (0:00:01.611) 0:56:22.696 ********
2026-03-28 06:10:56.083598 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-28 06:10:56.083609 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-28 06:10:56.083620 | orchestrator |
2026-03-28 06:10:56.083632 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-28 06:10:56.083643 | orchestrator | Saturday 28 March 2026 06:10:18 +0000 (0:00:01.839) 0:56:24.536 ********
2026-03-28 06:10:56.083654 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 06:10:56.083666 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 06:10:56.083695 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 06:10:56.083706 | orchestrator |
2026-03-28 06:10:56.083724 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-28 06:10:56.083735 | orchestrator | Saturday 28 March 2026 06:10:21 +0000 (0:00:03.252) 0:56:27.789 ********
2026-03-28 06:10:56.083747 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-28 06:10:56.083758 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 06:10:56.083769 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.083781 | orchestrator |
2026-03-28 06:10:56.083792 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-28 06:10:56.083803 | orchestrator | Saturday 28 March 2026 06:10:23 +0000 (0:00:01.522) 0:56:29.705 ********
2026-03-28 06:10:56.083815 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.083826 | orchestrator |
2026-03-28 06:10:56.083837 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-28 06:10:56.083848 | orchestrator | Saturday 28 March 2026 06:10:24 +0000 (0:00:01.153) 0:56:31.227 ********
2026-03-28 06:10:56.083876 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.083887 | orchestrator |
2026-03-28 06:10:56.083899 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-28 06:10:56.083910 | orchestrator | Saturday 28 March 2026 06:10:25 +0000 (0:00:01.153) 0:56:32.380 ********
2026-03-28 06:10:56.083921 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-03-28 06:10:56.083933 | orchestrator |
2026-03-28 06:10:56.083944 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-28 06:10:56.083956 | orchestrator | Saturday 28 March 2026 06:10:27 +0000 (0:00:01.492) 0:56:33.873 ********
2026-03-28 06:10:56.083967 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-03-28 06:10:56.083986 | orchestrator |
2026-03-28 06:10:56.083998 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-28 06:10:56.084009 | orchestrator | Saturday 28 March 2026 06:10:28 +0000 (0:00:01.513) 0:56:35.386 ********
2026-03-28 06:10:56.084020 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084031 | orchestrator |
2026-03-28 06:10:56.084042 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-28 06:10:56.084053 | orchestrator | Saturday 28 March 2026 06:10:30 +0000 (0:00:02.034) 0:56:37.421 ********
2026-03-28 06:10:56.084069 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084089 | orchestrator |
2026-03-28 06:10:56.084107 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-28 06:10:56.084126 | orchestrator | Saturday 28 March 2026 06:10:32 +0000 (0:00:01.985) 0:56:39.407 ********
2026-03-28 06:10:56.084145 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084164 | orchestrator |
2026-03-28 06:10:56.084183 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-28 06:10:56.084202 | orchestrator | Saturday 28 March 2026 06:10:35 +0000 (0:00:02.274) 0:56:41.681 ********
2026-03-28 06:10:56.084219 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084237 | orchestrator |
2026-03-28 06:10:56.084259 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-28 06:10:56.084284 | orchestrator | Saturday 28 March 2026 06:10:37 +0000 (0:00:02.368) 0:56:44.050 ********
2026-03-28 06:10:56.084303 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084321 | orchestrator |
2026-03-28 06:10:56.084340 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-03-28 06:10:56.084360 | orchestrator | Saturday 28 March 2026 06:10:39 +0000 (0:00:01.617) 0:56:45.668 ********
2026-03-28 06:10:56.084379 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:10:56.084398 | orchestrator |
2026-03-28 06:10:56.084410 | orchestrator | TASK [Restart active mds] ******************************************************
2026-03-28 06:10:56.084421 | orchestrator | Saturday 28 March 2026 06:10:40 +0000 (0:00:01.126) 0:56:46.795 ********
2026-03-28 06:10:56.084432 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:10:56.084443 | orchestrator |
2026-03-28 06:10:56.084455 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-03-28 06:10:56.084466 | orchestrator |
2026-03-28 06:10:56.084477 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 06:10:56.084488 | orchestrator | Saturday 28 March 2026 06:10:47 +0000 (0:00:07.183) 0:56:53.978 ********
2026-03-28 06:10:56.084499 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-3
2026-03-28 06:10:56.084510 | orchestrator |
2026-03-28 06:10:56.084521 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 06:10:56.084532 | orchestrator | Saturday 28 March 2026 06:10:48 +0000 (0:00:01.245) 0:56:55.224 ********
2026-03-28 06:10:56.084543 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:10:56.084554 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:10:56.084565 | orchestrator |
2026-03-28 06:10:56.084576 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 06:10:56.084587 | orchestrator | Saturday 28 March 2026 06:10:50 +0000 (0:00:01.550) 0:56:56.774 ********
2026-03-28 06:10:56.084598 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:10:56.084609 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:10:56.084620 | orchestrator |
2026-03-28 06:10:56.084631 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 06:10:56.084642 | orchestrator | Saturday 28 March 2026 06:10:51 +0000 (0:00:01.607) 0:56:58.382 ********
2026-03-28 06:10:56.084653 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:10:56.084664 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:10:56.084675 | orchestrator |
2026-03-28 06:10:56.084686 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 06:10:56.084697 | orchestrator | Saturday 28 March 2026 06:10:53 +0000 (0:00:01.591) 0:56:59.973 ********
2026-03-28 06:10:56.084719 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:10:56.084730 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:10:56.084741 | orchestrator |
2026-03-28 06:10:56.084752 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 06:10:56.084763 | orchestrator | Saturday 28 March 2026 06:10:54 +0000 (0:00:01.229) 0:57:01.203 ********
2026-03-28 06:10:56.084774 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:10:56.084797 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.442723 | orchestrator |
2026-03-28 06:11:19.442841 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 06:11:19.442993 | orchestrator | Saturday 28 March 2026 06:10:56 +0000 (0:00:01.300) 0:57:02.503 ********
2026-03-28 06:11:19.443012 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.443025 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.443036 | orchestrator |
2026-03-28 06:11:19.443060 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 06:11:19.443073 | orchestrator | Saturday 28 March 2026 06:10:57 +0000 (0:00:01.335) 0:57:03.839 ********
2026-03-28 06:11:19.443095 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:19.443108 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:11:19.443119 | orchestrator |
2026-03-28 06:11:19.443130 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 06:11:19.443141 | orchestrator | Saturday 28 March 2026 06:10:58 +0000 (0:00:01.278) 0:57:05.117 ********
2026-03-28 06:11:19.443152 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.443163 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.443173 | orchestrator |
2026-03-28 06:11:19.443184 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 06:11:19.443195 | orchestrator | Saturday 28 March 2026 06:10:59 +0000 (0:00:01.281) 0:57:06.399 ********
2026-03-28 06:11:19.443206 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:11:19.443218 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:11:19.443228 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:11:19.443239 | orchestrator |
2026-03-28 06:11:19.443250 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 06:11:19.443263 | orchestrator | Saturday 28 March 2026 06:11:01 +0000 (0:00:01.818) 0:57:08.218 ********
2026-03-28 06:11:19.443276 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.443288 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.443300 | orchestrator |
2026-03-28 06:11:19.443313 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 06:11:19.443325 | orchestrator | Saturday 28 March 2026 06:11:03 +0000 (0:00:01.461) 0:57:09.679 ********
2026-03-28 06:11:19.443338 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:11:19.443350 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:11:19.443363 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:11:19.443376 | orchestrator |
2026-03-28 06:11:19.443388 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 06:11:19.443401 | orchestrator | Saturday 28 March 2026 06:11:06 +0000 (0:00:02.967) 0:57:12.647 ********
2026-03-28 06:11:19.443414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 06:11:19.443426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 06:11:19.443439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 06:11:19.443451 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:19.443464 | orchestrator |
2026-03-28 06:11:19.443477 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 06:11:19.443490 | orchestrator | Saturday 28 March 2026 06:11:07 +0000 (0:00:01.567) 0:57:14.214 ********
2026-03-28 06:11:19.443527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443564 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:19.443575 | orchestrator |
2026-03-28 06:11:19.443586 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 06:11:19.443597 | orchestrator | Saturday 28 March 2026 06:11:09 +0000 (0:00:01.670) 0:57:15.885 ********
2026-03-28 06:11:19.443611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443644 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443675 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:19.443687 | orchestrator |
2026-03-28 06:11:19.443698 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-28 06:11:19.443709 | orchestrator | Saturday 28 March 2026 06:11:10 +0000 (0:00:01.192) 0:57:17.077 ********
2026-03-28 06:11:19.443722 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 06:11:03.807924', 'end': '2026-03-28 06:11:03.865044', 'delta': '0:00:00.057120', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443737 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:11:04.385120', 'end': '2026-03-28 06:11:04.428323', 'delta': '0:00:00.043203', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443758 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:11:04.984319', 'end': '2026-03-28 06:11:05.045696', 'delta': '0:00:00.061377', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 06:11:19.443770 | orchestrator |
2026-03-28 06:11:19.443781 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 06:11:19.443792 | orchestrator | Saturday 28 March 2026 06:11:11 +0000 (0:00:01.231) 0:57:18.309 ********
2026-03-28 06:11:19.443803 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.443814 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.443825 | orchestrator |
2026-03-28 06:11:19.443836 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 06:11:19.443847 | orchestrator | Saturday 28 March 2026 06:11:13 +0000 (0:00:01.365) 0:57:19.756 ********
2026-03-28 06:11:19.443886 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:19.443898 | orchestrator |
2026-03-28 06:11:19.443909 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 06:11:19.443920 | orchestrator | Saturday 28 March 2026 06:11:14 +0000 (0:00:01.365) 0:57:21.121 ********
2026-03-28 06:11:19.443931 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.443942 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:19.443953 | orchestrator |
2026-03-28 06:11:19.443963 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 06:11:19.443974 | orchestrator | Saturday 28 March 2026 06:11:16 +0000 (0:00:01.331) 0:57:22.453 ********
2026-03-28 06:11:19.443985 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-28 06:11:19.443996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 06:11:19.444007 | orchestrator |
2026-03-28 06:11:19.444018 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:11:19.444029 | orchestrator | Saturday 28 March 2026 06:11:18 +0000 (0:00:02.147) 0:57:24.600 ********
2026-03-28 06:11:19.444040 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:19.444058 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:31.058728 | orchestrator |
2026-03-28 06:11:31.058847 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 06:11:31.058969 | orchestrator | Saturday 28 March 2026 06:11:19 +0000 (0:00:01.265) 0:57:25.865 ********
2026-03-28 06:11:31.058984 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.058997 | orchestrator |
2026-03-28 06:11:31.059008 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 06:11:31.059019 | orchestrator | Saturday 28 March 2026 06:11:20 +0000 (0:00:01.130) 0:57:26.996 ********
2026-03-28 06:11:31.059031 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.059042 | orchestrator |
2026-03-28 06:11:31.059053 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:11:31.059064 | orchestrator | Saturday 28 March 2026 06:11:21 +0000 (0:00:01.209) 0:57:28.206 ********
2026-03-28 06:11:31.059075 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.059086 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:11:31.059097 | orchestrator |
2026-03-28 06:11:31.059108 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 06:11:31.059143 | orchestrator | Saturday 28 March 2026 06:11:23 +0000 (0:00:01.302) 0:57:29.509 ********
2026-03-28 06:11:31.059154 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.059165 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:11:31.059176 | orchestrator |
2026-03-28 06:11:31.059187 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 06:11:31.059199 | orchestrator | Saturday 28 March 2026 06:11:24 +0000 (0:00:01.224) 0:57:30.733 ********
2026-03-28 06:11:31.059210 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:31.059222 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:31.059232 | orchestrator |
2026-03-28 06:11:31.059243 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 06:11:31.059254 | orchestrator | Saturday 28 March 2026 06:11:25 +0000 (0:00:01.316) 0:57:32.050 ********
2026-03-28 06:11:31.059265 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.059276 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:11:31.059287 | orchestrator |
2026-03-28 06:11:31.059297 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 06:11:31.059308 | orchestrator | Saturday 28 March 2026 06:11:26 +0000 (0:00:01.308) 0:57:33.358 ********
2026-03-28 06:11:31.059319 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:31.059330 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:31.059341 | orchestrator |
2026-03-28 06:11:31.059352 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 06:11:31.059362 | orchestrator | Saturday 28 March 2026 06:11:28 +0000 (0:00:01.317) 0:57:34.676 ********
2026-03-28 06:11:31.059373 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:11:31.059384 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:11:31.059395 | orchestrator |
2026-03-28 06:11:31.059406 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 06:11:31.059417 | orchestrator | Saturday 28 March 2026 06:11:29 +0000 (0:00:01.251) 0:57:35.927 ********
2026-03-28 06:11:31.059428 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:11:31.059439 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:11:31.059449 | orchestrator |
2026-03-28 06:11:31.059460 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 06:11:31.059471 | orchestrator | Saturday 28 March 2026 06:11:30 +0000 (0:00:01.317) 0:57:37.245 ********
2026-03-28 06:11:31.059484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:11:31.059500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}})
2026-03-28 06:11:31.059516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-28 06:11:31.059561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}})
2026-03-28 06:11:31.059576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:11:31.059587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:11:31.059600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-28 06:11:31.059612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.059624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:11:31.059636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.059661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}})  2026-03-28 06:11:31.177221 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}})  2026-03-28 06:11:31.177347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}})  2026-03-28 06:11:31.177360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 06:11:31.177429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 06:11:31.177467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}})  2026-03-28 06:11:31.177480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177504 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:31.177547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 
'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 06:11:32.414208 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:11:32.414231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 
'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}})  2026-03-28 06:11:32.414290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}})  2026-03-28 06:11:32.414344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 06:11:32.414402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:11:32.414452 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:11:32.414466 | orchestrator | 2026-03-28 06:11:32.414480 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 06:11:32.414495 | orchestrator | Saturday 28 March 2026 06:11:32 +0000 (0:00:01.462) 0:57:38.707 ******** 2026-03-28 06:11:32.414526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.542724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.542826 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.542843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.542944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.542974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543030 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.543136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 
'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647837 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.647994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:32.648016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861156 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:11:33.861263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861357 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:11:33.861453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 
'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:12:01.866586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:12:01.866730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:12:01.866749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:12:01.866762 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.866775 | orchestrator | 2026-03-28 06:12:01.866788 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 06:12:01.866800 | orchestrator | Saturday 28 March 2026 06:11:33 +0000 (0:00:01.579) 0:57:40.286 ******** 2026-03-28 06:12:01.866811 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:01.866823 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:01.866834 | orchestrator | 2026-03-28 06:12:01.866845 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 06:12:01.866960 | orchestrator | Saturday 28 March 2026 06:11:35 +0000 (0:00:01.689) 0:57:41.976 ******** 2026-03-28 06:12:01.866972 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:01.866983 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:01.866994 | orchestrator | 2026-03-28 06:12:01.867006 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:12:01.867017 | orchestrator | Saturday 28 March 2026 06:11:36 +0000 (0:00:01.221) 0:57:43.198 ******** 2026-03-28 06:12:01.867028 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:01.867039 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:01.867051 | orchestrator | 2026-03-28 06:12:01.867078 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:12:01.867090 | orchestrator | Saturday 28 March 2026 06:11:38 +0000 (0:00:01.673) 0:57:44.871 ******** 2026-03-28 06:12:01.867101 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867114 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867127 | orchestrator | 2026-03-28 06:12:01.867140 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-03-28 06:12:01.867154 | orchestrator | Saturday 28 March 2026 06:11:39 +0000 (0:00:01.275) 0:57:46.147 ******** 2026-03-28 06:12:01.867167 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867180 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867194 | orchestrator | 2026-03-28 06:12:01.867206 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:12:01.867220 | orchestrator | Saturday 28 March 2026 06:11:41 +0000 (0:00:01.381) 0:57:47.528 ******** 2026-03-28 06:12:01.867233 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867246 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867259 | orchestrator | 2026-03-28 06:12:01.867273 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 06:12:01.867295 | orchestrator | Saturday 28 March 2026 06:11:42 +0000 (0:00:01.285) 0:57:48.813 ******** 2026-03-28 06:12:01.867308 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-28 06:12:01.867322 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-28 06:12:01.867335 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-28 06:12:01.867348 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-28 06:12:01.867361 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 06:12:01.867374 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-28 06:12:01.867387 | orchestrator | 2026-03-28 06:12:01.867403 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 06:12:01.867425 | orchestrator | Saturday 28 March 2026 06:11:44 +0000 (0:00:02.170) 0:57:50.984 ******** 2026-03-28 06:12:01.867473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 06:12:01.867500 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 06:12:01.867517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 06:12:01.867534 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 06:12:01.867575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 06:12:01.867592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 06:12:01.867610 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867628 | orchestrator | 2026-03-28 06:12:01.867645 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 06:12:01.867663 | orchestrator | Saturday 28 March 2026 06:11:45 +0000 (0:00:01.410) 0:57:52.395 ******** 2026-03-28 06:12:01.867682 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3 2026-03-28 06:12:01.867699 | orchestrator | 2026-03-28 06:12:01.867760 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:12:01.867795 | orchestrator | Saturday 28 March 2026 06:11:47 +0000 (0:00:01.273) 0:57:53.668 ******** 2026-03-28 06:12:01.867814 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867831 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867873 | orchestrator | 2026-03-28 06:12:01.867894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:12:01.867911 | orchestrator | Saturday 28 March 2026 06:11:48 +0000 (0:00:01.264) 0:57:54.932 ******** 2026-03-28 06:12:01.867930 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.867947 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.867961 | orchestrator | 2026-03-28 06:12:01.867977 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:12:01.867993 | orchestrator | Saturday 28 March 2026 06:11:49 +0000 (0:00:01.246) 0:57:56.179 ******** 2026-03-28 06:12:01.868011 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.868029 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:01.868047 | orchestrator | 2026-03-28 06:12:01.868064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:12:01.868083 | orchestrator | Saturday 28 March 2026 06:11:50 +0000 (0:00:01.253) 0:57:57.432 ******** 2026-03-28 06:12:01.868101 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:01.868119 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:01.868137 | orchestrator | 2026-03-28 06:12:01.868155 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:12:01.868173 | orchestrator | Saturday 28 March 2026 06:11:52 +0000 (0:00:01.339) 0:57:58.772 ******** 2026-03-28 06:12:01.868190 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:12:01.868208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:12:01.868227 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:12:01.868264 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.868283 | orchestrator | 2026-03-28 06:12:01.868302 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:12:01.868314 | orchestrator | Saturday 28 March 2026 06:11:54 +0000 (0:00:01.798) 0:58:00.570 ******** 2026-03-28 06:12:01.868325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:12:01.868336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:12:01.868347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  
2026-03-28 06:12:01.868359 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.868370 | orchestrator | 2026-03-28 06:12:01.868381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:12:01.868392 | orchestrator | Saturday 28 March 2026 06:11:55 +0000 (0:00:01.396) 0:58:01.966 ******** 2026-03-28 06:12:01.868420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:12:01.868440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:12:01.868452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:12:01.868463 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:01.868474 | orchestrator | 2026-03-28 06:12:01.868485 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:12:01.868496 | orchestrator | Saturday 28 March 2026 06:11:56 +0000 (0:00:01.396) 0:58:03.363 ******** 2026-03-28 06:12:01.868507 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:01.868519 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:01.868530 | orchestrator | 2026-03-28 06:12:01.868541 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:12:01.868552 | orchestrator | Saturday 28 March 2026 06:11:58 +0000 (0:00:01.260) 0:58:04.624 ******** 2026-03-28 06:12:01.868563 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 06:12:01.868574 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 06:12:01.868585 | orchestrator | 2026-03-28 06:12:01.868597 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 06:12:01.868608 | orchestrator | Saturday 28 March 2026 06:11:59 +0000 (0:00:01.472) 0:58:06.097 ******** 2026-03-28 06:12:01.868619 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 
06:12:01.868630 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:12:01.868641 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:12:01.868652 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:12:01.868664 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 06:12:01.868675 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:12:01.868702 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:12:47.017743 | orchestrator | 2026-03-28 06:12:47.017950 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 06:12:47.017969 | orchestrator | Saturday 28 March 2026 06:12:01 +0000 (0:00:02.185) 0:58:08.282 ******** 2026-03-28 06:12:47.017982 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:12:47.017994 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:12:47.018006 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:12:47.018086 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:12:47.018099 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 06:12:47.018112 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:12:47.018123 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:12:47.018162 | orchestrator | 2026-03-28 06:12:47.018174 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-03-28 
06:12:47.018185 | orchestrator | Saturday 28 March 2026 06:12:04 +0000 (0:00:02.718) 0:58:11.001 ******** 2026-03-28 06:12:47.018196 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:47.018209 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:47.018220 | orchestrator | 2026-03-28 06:12:47.018231 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 06:12:47.018243 | orchestrator | Saturday 28 March 2026 06:12:05 +0000 (0:00:01.293) 0:58:12.295 ******** 2026-03-28 06:12:47.018257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3 2026-03-28 06:12:47.018272 | orchestrator | 2026-03-28 06:12:47.018285 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 06:12:47.018298 | orchestrator | Saturday 28 March 2026 06:12:07 +0000 (0:00:01.543) 0:58:13.839 ******** 2026-03-28 06:12:47.018311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3 2026-03-28 06:12:47.018324 | orchestrator | 2026-03-28 06:12:47.018336 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 06:12:47.018349 | orchestrator | Saturday 28 March 2026 06:12:08 +0000 (0:00:01.241) 0:58:15.081 ******** 2026-03-28 06:12:47.018362 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:12:47.018375 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:12:47.018387 | orchestrator | 2026-03-28 06:12:47.018400 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 06:12:47.018413 | orchestrator | Saturday 28 March 2026 06:12:09 +0000 (0:00:01.269) 0:58:16.350 ******** 2026-03-28 06:12:47.018426 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:12:47.018439 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:12:47.018451 | 
orchestrator |
2026-03-28 06:12:47.018464 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 06:12:47.018477 | orchestrator | Saturday 28 March 2026 06:12:11 +0000 (0:00:01.733) 0:58:18.084 ********
2026-03-28 06:12:47.018489 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.018503 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.018515 | orchestrator |
2026-03-28 06:12:47.018528 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 06:12:47.018541 | orchestrator | Saturday 28 March 2026 06:12:13 +0000 (0:00:01.743) 0:58:19.828 ********
2026-03-28 06:12:47.018553 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.018566 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.018579 | orchestrator |
2026-03-28 06:12:47.018593 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 06:12:47.018605 | orchestrator | Saturday 28 March 2026 06:12:15 +0000 (0:00:01.805) 0:58:21.633 ********
2026-03-28 06:12:47.018616 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.018646 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.018657 | orchestrator |
2026-03-28 06:12:47.018668 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 06:12:47.018678 | orchestrator | Saturday 28 March 2026 06:12:16 +0000 (0:00:01.285) 0:58:22.918 ********
2026-03-28 06:12:47.018689 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.018700 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.018711 | orchestrator |
2026-03-28 06:12:47.018722 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 06:12:47.018733 | orchestrator | Saturday 28 March 2026 06:12:17 +0000 (0:00:01.291) 0:58:24.210 ********
2026-03-28 06:12:47.018744 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.018755 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.018765 | orchestrator |
2026-03-28 06:12:47.018776 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 06:12:47.018787 | orchestrator | Saturday 28 March 2026 06:12:19 +0000 (0:00:01.228) 0:58:25.439 ********
2026-03-28 06:12:47.018806 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.018817 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.018828 | orchestrator |
2026-03-28 06:12:47.018839 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 06:12:47.018910 | orchestrator | Saturday 28 March 2026 06:12:20 +0000 (0:00:01.712) 0:58:27.151 ********
2026-03-28 06:12:47.018921 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.018932 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.018944 | orchestrator |
2026-03-28 06:12:47.018955 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 06:12:47.018966 | orchestrator | Saturday 28 March 2026 06:12:22 +0000 (0:00:01.707) 0:58:28.858 ********
2026-03-28 06:12:47.018977 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.018988 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.018999 | orchestrator |
2026-03-28 06:12:47.019010 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 06:12:47.019021 | orchestrator | Saturday 28 March 2026 06:12:23 +0000 (0:00:01.298) 0:58:30.157 ********
2026-03-28 06:12:47.019032 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019066 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019078 | orchestrator |
2026-03-28 06:12:47.019089 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 06:12:47.019100 | orchestrator | Saturday 28 March 2026 06:12:24 +0000 (0:00:01.242) 0:58:31.399 ********
2026-03-28 06:12:47.019111 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.019121 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.019133 | orchestrator |
2026-03-28 06:12:47.019143 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 06:12:47.019154 | orchestrator | Saturday 28 March 2026 06:12:26 +0000 (0:00:01.344) 0:58:32.744 ********
2026-03-28 06:12:47.019165 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.019176 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.019187 | orchestrator |
2026-03-28 06:12:47.019198 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 06:12:47.019208 | orchestrator | Saturday 28 March 2026 06:12:27 +0000 (0:00:01.282) 0:58:34.026 ********
2026-03-28 06:12:47.019219 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.019230 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.019241 | orchestrator |
2026-03-28 06:12:47.019252 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 06:12:47.019263 | orchestrator | Saturday 28 March 2026 06:12:28 +0000 (0:00:01.308) 0:58:35.335 ********
2026-03-28 06:12:47.019274 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019285 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019295 | orchestrator |
2026-03-28 06:12:47.019306 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 06:12:47.019317 | orchestrator | Saturday 28 March 2026 06:12:30 +0000 (0:00:01.242) 0:58:36.578 ********
2026-03-28 06:12:47.019328 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019339 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019350 | orchestrator |
2026-03-28 06:12:47.019361 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 06:12:47.019372 | orchestrator | Saturday 28 March 2026 06:12:31 +0000 (0:00:01.343) 0:58:37.921 ********
2026-03-28 06:12:47.019382 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019393 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019404 | orchestrator |
2026-03-28 06:12:47.019415 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 06:12:47.019426 | orchestrator | Saturday 28 March 2026 06:12:33 +0000 (0:00:01.584) 0:58:39.505 ********
2026-03-28 06:12:47.019436 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.019447 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.019458 | orchestrator |
2026-03-28 06:12:47.019469 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 06:12:47.019480 | orchestrator | Saturday 28 March 2026 06:12:34 +0000 (0:00:01.269) 0:58:40.775 ********
2026-03-28 06:12:47.019499 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:12:47.019510 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:12:47.019521 | orchestrator |
2026-03-28 06:12:47.019531 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 06:12:47.019542 | orchestrator | Saturday 28 March 2026 06:12:35 +0000 (0:00:01.243) 0:58:42.019 ********
2026-03-28 06:12:47.019553 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019564 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019575 | orchestrator |
2026-03-28 06:12:47.019586 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 06:12:47.019597 | orchestrator | Saturday 28 March 2026 06:12:36 +0000 (0:00:01.346) 0:58:43.365 ********
2026-03-28 06:12:47.019608 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019619 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019630 | orchestrator |
2026-03-28 06:12:47.019640 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 06:12:47.019651 | orchestrator | Saturday 28 March 2026 06:12:38 +0000 (0:00:01.285) 0:58:44.651 ********
2026-03-28 06:12:47.019662 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019673 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019684 | orchestrator |
2026-03-28 06:12:47.019694 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 06:12:47.019711 | orchestrator | Saturday 28 March 2026 06:12:39 +0000 (0:00:01.284) 0:58:45.936 ********
2026-03-28 06:12:47.019722 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019733 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019744 | orchestrator |
2026-03-28 06:12:47.019755 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 06:12:47.019765 | orchestrator | Saturday 28 March 2026 06:12:40 +0000 (0:00:01.251) 0:58:47.188 ********
2026-03-28 06:12:47.019776 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019787 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019797 | orchestrator |
2026-03-28 06:12:47.019808 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 06:12:47.019819 | orchestrator | Saturday 28 March 2026 06:12:41 +0000 (0:00:01.231) 0:58:48.420 ********
2026-03-28 06:12:47.019830 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019863 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019875 | orchestrator |
2026-03-28 06:12:47.019886 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 06:12:47.019896 | orchestrator | Saturday 28 March 2026 06:12:43 +0000 (0:00:01.264) 0:58:49.685 ********
2026-03-28 06:12:47.019907 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019918 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019929 | orchestrator |
2026-03-28 06:12:47.019940 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 06:12:47.019951 | orchestrator | Saturday 28 March 2026 06:12:44 +0000 (0:00:01.258) 0:58:50.943 ********
2026-03-28 06:12:47.019961 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.019973 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.019983 | orchestrator |
2026-03-28 06:12:47.019994 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 06:12:47.020005 | orchestrator | Saturday 28 March 2026 06:12:45 +0000 (0:00:01.266) 0:58:52.209 ********
2026-03-28 06:12:47.020016 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:12:47.020027 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:12:47.020037 | orchestrator |
2026-03-28 06:12:47.020055 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 06:13:32.805726 | orchestrator | Saturday 28 March 2026 06:12:46 +0000 (0:00:01.219) 0:58:53.429 ********
2026-03-28 06:13:32.805911 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.805940 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.805957 | orchestrator |
2026-03-28 06:13:32.805994 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 06:13:32.806006 | orchestrator | Saturday 28 March 2026 06:12:48 +0000 (0:00:01.625) 0:58:55.054 ********
2026-03-28 06:13:32.806069 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806083 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806094 | orchestrator |
2026-03-28 06:13:32.806106 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 06:13:32.806116 | orchestrator | Saturday 28 March 2026 06:12:49 +0000 (0:00:01.262) 0:58:56.317 ********
2026-03-28 06:13:32.806127 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806138 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806158 | orchestrator |
2026-03-28 06:13:32.806170 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 06:13:32.806181 | orchestrator | Saturday 28 March 2026 06:12:51 +0000 (0:00:01.273) 0:58:57.590 ********
2026-03-28 06:13:32.806191 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.806203 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.806214 | orchestrator |
2026-03-28 06:13:32.806224 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 06:13:32.806235 | orchestrator | Saturday 28 March 2026 06:12:53 +0000 (0:00:02.044) 0:58:59.635 ********
2026-03-28 06:13:32.806246 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.806259 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.806272 | orchestrator |
2026-03-28 06:13:32.806285 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 06:13:32.806297 | orchestrator | Saturday 28 March 2026 06:12:55 +0000 (0:00:02.418) 0:59:02.053 ********
2026-03-28 06:13:32.806310 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3
2026-03-28 06:13:32.806323 | orchestrator |
2026-03-28 06:13:32.806335 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 06:13:32.806348 | orchestrator | Saturday 28 March 2026 06:12:57 +0000 (0:00:01.515) 0:59:03.568 ********
2026-03-28 06:13:32.806360 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806372 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806385 | orchestrator |
2026-03-28 06:13:32.806396 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 06:13:32.806410 | orchestrator | Saturday 28 March 2026 06:12:58 +0000 (0:00:01.250) 0:59:04.819 ********
2026-03-28 06:13:32.806422 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806434 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806447 | orchestrator |
2026-03-28 06:13:32.806460 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 06:13:32.806471 | orchestrator | Saturday 28 March 2026 06:12:59 +0000 (0:00:01.242) 0:59:06.061 ********
2026-03-28 06:13:32.806482 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 06:13:32.806493 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 06:13:32.806504 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 06:13:32.806515 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 06:13:32.806525 | orchestrator |
2026-03-28 06:13:32.806536 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 06:13:32.806546 | orchestrator | Saturday 28 March 2026 06:13:01 +0000 (0:00:01.990) 0:59:08.052 ********
2026-03-28 06:13:32.806557 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.806568 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.806579 | orchestrator |
2026-03-28 06:13:32.806589 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 06:13:32.806614 | orchestrator | Saturday 28 March 2026 06:13:03 +0000 (0:00:01.620) 0:59:09.672 ********
2026-03-28 06:13:32.806625 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806636 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806655 | orchestrator |
2026-03-28 06:13:32.806666 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 06:13:32.806677 | orchestrator | Saturday 28 March 2026 06:13:04 +0000 (0:00:01.269) 0:59:10.942 ********
2026-03-28 06:13:32.806687 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806698 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806713 | orchestrator |
2026-03-28 06:13:32.806732 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 06:13:32.806743 | orchestrator | Saturday 28 March 2026 06:13:05 +0000 (0:00:01.350) 0:59:12.293 ********
2026-03-28 06:13:32.806754 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.806765 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.806776 | orchestrator |
2026-03-28 06:13:32.806787 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 06:13:32.806798 | orchestrator | Saturday 28 March 2026 06:13:07 +0000 (0:00:01.307) 0:59:13.600 ********
2026-03-28 06:13:32.806809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-3
2026-03-28 06:13:32.806820 | orchestrator |
2026-03-28 06:13:32.806831 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 06:13:32.806892 | orchestrator | Saturday 28 March 2026 06:13:08 +0000 (0:00:01.224) 0:59:14.825 ********
2026-03-28 06:13:32.806904 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.806915 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.806926 | orchestrator |
2026-03-28 06:13:32.806937 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 06:13:32.806949 | orchestrator | Saturday 28 March 2026 06:13:10 +0000 (0:00:01.803) 0:59:16.629 ********
2026-03-28 06:13:32.806960 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 06:13:32.806991 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 06:13:32.807002 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 06:13:32.807013 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807024 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 06:13:32.807035 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 06:13:32.807046 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 06:13:32.807057 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807068 | orchestrator |
2026-03-28 06:13:32.807079 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 06:13:32.807090 | orchestrator | Saturday 28 March 2026 06:13:11 +0000 (0:00:01.243) 0:59:17.872 ********
2026-03-28 06:13:32.807101 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807111 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807122 | orchestrator |
2026-03-28 06:13:32.807133 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 06:13:32.807144 | orchestrator | Saturday 28 March 2026 06:13:12 +0000 (0:00:01.256) 0:59:19.129 ********
2026-03-28 06:13:32.807155 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807166 | orchestrator |
2026-03-28 06:13:32.807177 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 06:13:32.807188 | orchestrator | Saturday 28 March 2026 06:13:13 +0000 (0:00:01.197) 0:59:20.327 ********
2026-03-28 06:13:32.807198 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807209 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807220 | orchestrator |
2026-03-28 06:13:32.807231 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 06:13:32.807242 | orchestrator | Saturday 28 March 2026 06:13:15 +0000 (0:00:01.473) 0:59:21.801 ********
2026-03-28 06:13:32.807253 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807264 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807283 | orchestrator |
2026-03-28 06:13:32.807294 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 06:13:32.807304 | orchestrator | Saturday 28 March 2026 06:13:16 +0000 (0:00:01.245) 0:59:23.046 ********
2026-03-28 06:13:32.807315 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807326 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807337 | orchestrator |
2026-03-28 06:13:32.807348 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 06:13:32.807359 | orchestrator | Saturday 28 March 2026 06:13:18 +0000 (0:00:01.512) 0:59:24.559 ********
2026-03-28 06:13:32.807370 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.807381 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.807392 | orchestrator |
2026-03-28 06:13:32.807403 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 06:13:32.807413 | orchestrator | Saturday 28 March 2026 06:13:20 +0000 (0:00:02.652) 0:59:27.212 ********
2026-03-28 06:13:32.807424 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:13:32.807435 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:13:32.807446 | orchestrator |
2026-03-28 06:13:32.807457 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 06:13:32.807468 | orchestrator | Saturday 28 March 2026 06:13:22 +0000 (0:00:01.299) 0:59:28.511 ********
2026-03-28 06:13:32.807479 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3
2026-03-28 06:13:32.807491 | orchestrator |
2026-03-28 06:13:32.807501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 06:13:32.807512 | orchestrator | Saturday 28 March 2026 06:13:23 +0000 (0:00:01.495) 0:59:30.007 ********
2026-03-28 06:13:32.807523 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807534 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807544 | orchestrator |
2026-03-28 06:13:32.807555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 06:13:32.807572 | orchestrator | Saturday 28 March 2026 06:13:24 +0000 (0:00:01.278) 0:59:31.285 ********
2026-03-28 06:13:32.807583 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807594 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807605 | orchestrator |
2026-03-28 06:13:32.807616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 06:13:32.807627 | orchestrator | Saturday 28 March 2026 06:13:26 +0000 (0:00:01.257) 0:59:32.543 ********
2026-03-28 06:13:32.807638 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807648 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807659 | orchestrator |
2026-03-28 06:13:32.807670 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 06:13:32.807681 | orchestrator | Saturday 28 March 2026 06:13:27 +0000 (0:00:01.265) 0:59:33.808 ********
2026-03-28 06:13:32.807692 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807703 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807713 | orchestrator |
2026-03-28 06:13:32.807724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 06:13:32.807735 | orchestrator | Saturday 28 March 2026 06:13:28 +0000 (0:00:01.276) 0:59:35.085 ********
2026-03-28 06:13:32.807746 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807757 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807768 | orchestrator |
2026-03-28 06:13:32.807778 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 06:13:32.807789 | orchestrator | Saturday 28 March 2026 06:13:29 +0000 (0:00:01.267) 0:59:36.353 ********
2026-03-28 06:13:32.807800 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807811 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807822 | orchestrator |
2026-03-28 06:13:32.807833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 06:13:32.807862 | orchestrator | Saturday 28 March 2026 06:13:31 +0000 (0:00:01.279) 0:59:37.632 ********
2026-03-28 06:13:32.807873 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:13:32.807891 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:13:32.807902 | orchestrator |
2026-03-28 06:13:32.807920 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 06:14:15.188091 | orchestrator | Saturday 28 March 2026 06:13:32 +0000 (0:00:01.593) 0:59:39.226 ********
2026-03-28 06:14:15.188212 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.188230 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.188243 | orchestrator |
2026-03-28 06:14:15.188255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 06:14:15.188267 | orchestrator | Saturday 28 March 2026 06:13:34 +0000 (0:00:01.342) 0:59:40.569 ********
2026-03-28 06:14:15.188279 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:14:15.188291 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:14:15.188302 | orchestrator |
2026-03-28 06:14:15.188314 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 06:14:15.188325 | orchestrator | Saturday 28 March 2026 06:13:35 +0000 (0:00:01.236) 0:59:41.806 ********
2026-03-28 06:14:15.188337 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3
2026-03-28 06:14:15.188348 | orchestrator |
2026-03-28 06:14:15.188360 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 06:14:15.188371 | orchestrator | Saturday 28 March 2026 06:13:36 +0000 (0:00:01.282) 0:59:43.088 ********
2026-03-28 06:14:15.188382 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-28 06:14:15.188394 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 06:14:15.188406 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-28 06:14:15.188417 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 06:14:15.188428 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-28 06:14:15.188439 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 06:14:15.188451 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-28 06:14:15.188462 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 06:14:15.188473 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-28 06:14:15.188484 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 06:14:15.188495 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-28 06:14:15.188506 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 06:14:15.188517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-28 06:14:15.188528 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 06:14:15.188539 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-28 06:14:15.188550 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 06:14:15.188562 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 06:14:15.188573 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 06:14:15.188584 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 06:14:15.188596 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 06:14:15.188607 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 06:14:15.188618 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 06:14:15.188631 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 06:14:15.188644 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 06:14:15.188657 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 06:14:15.188670 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 06:14:15.188682 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 06:14:15.188695 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 06:14:15.188745 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-28 06:14:15.188759 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 06:14:15.188773 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-28 06:14:15.188786 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 06:14:15.188799 | orchestrator |
2026-03-28 06:14:15.188813 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 06:14:15.188826 | orchestrator | Saturday 28 March 2026 06:13:43 +0000 (0:00:07.126) 0:59:50.215 ********
2026-03-28 06:14:15.188864 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3
2026-03-28 06:14:15.188876 | orchestrator |
2026-03-28 06:14:15.188890 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 06:14:15.188903 | orchestrator | Saturday 28 March 2026 06:13:45 +0000 (0:00:01.295) 0:59:51.511 ********
2026-03-28 06:14:15.188916 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.188931 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.188943 | orchestrator |
2026-03-28 06:14:15.188955 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 06:14:15.188968 | orchestrator | Saturday 28 March 2026 06:13:46 +0000 (0:00:01.616) 0:59:53.127 ********
2026-03-28 06:14:15.188980 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.188991 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.189002 | orchestrator |
2026-03-28 06:14:15.189013 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 06:14:15.189054 | orchestrator | Saturday 28 March 2026 06:13:49 +0000 (0:00:02.664) 0:59:55.792 ********
2026-03-28 06:14:15.189066 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189078 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189100 | orchestrator |
2026-03-28 06:14:15.189111 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 06:14:15.189122 | orchestrator | Saturday 28 March 2026 06:13:50 +0000 (0:00:01.232) 0:59:57.025 ********
2026-03-28 06:14:15.189133 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189144 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189155 | orchestrator |
2026-03-28 06:14:15.189166 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 06:14:15.189177 | orchestrator | Saturday 28 March 2026 06:13:51 +0000 (0:00:01.302) 0:59:58.327 ********
2026-03-28 06:14:15.189188 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189199 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189210 | orchestrator |
2026-03-28 06:14:15.189221 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 06:14:15.189232 | orchestrator | Saturday 28 March 2026 06:13:53 +0000 (0:00:01.588) 0:59:59.917 ********
2026-03-28 06:14:15.189243 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189255 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189266 | orchestrator |
2026-03-28 06:14:15.189277 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 06:14:15.189288 | orchestrator | Saturday 28 March 2026 06:13:54 +0000 (0:00:01.247) 1:00:01.164 ********
2026-03-28 06:14:15.189299 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189310 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189321 | orchestrator |
2026-03-28 06:14:15.189332 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 06:14:15.189343 | orchestrator | Saturday 28 March 2026 06:13:56 +0000 (0:00:01.297) 1:00:02.462 ********
2026-03-28 06:14:15.189363 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189374 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189385 | orchestrator |
2026-03-28 06:14:15.189396 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 06:14:15.189407 | orchestrator | Saturday 28 March 2026 06:13:57 +0000 (0:00:01.268) 1:00:03.730 ********
2026-03-28 06:14:15.189418 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189429 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189440 | orchestrator |
2026-03-28 06:14:15.189451 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 06:14:15.189462 | orchestrator | Saturday 28 March 2026 06:13:58 +0000 (0:00:01.276) 1:00:05.006 ********
2026-03-28 06:14:15.189473 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189484 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189495 | orchestrator |
2026-03-28 06:14:15.189506 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 06:14:15.189716 | orchestrator | Saturday 28 March 2026 06:13:59 +0000 (0:00:01.243) 1:00:06.250 ********
2026-03-28 06:14:15.189734 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189752 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189770 | orchestrator |
2026-03-28 06:14:15.189789 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 06:14:15.189808 | orchestrator | Saturday 28 March 2026 06:14:01 +0000 (0:00:01.269) 1:00:07.520 ********
2026-03-28 06:14:15.189826 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189868 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189879 | orchestrator |
2026-03-28 06:14:15.189890 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 06:14:15.189901 | orchestrator | Saturday 28 March 2026 06:14:02 +0000 (0:00:01.666) 1:00:09.187 ********
2026-03-28 06:14:15.189912 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:14:15.189922 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:14:15.189933 | orchestrator |
2026-03-28 06:14:15.189953 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 06:14:15.189964 | orchestrator | Saturday 28 March 2026 06:14:04 +0000 (0:00:01.358) 1:00:10.546 ********
2026-03-28 06:14:15.189975 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-28 06:14:15.189986 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 06:14:15.189997 | orchestrator |
2026-03-28 06:14:15.190008 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 06:14:15.190075 | orchestrator | Saturday 28 March 2026 06:14:08 +0000 (0:00:04.610) 1:00:15.156 ********
2026-03-28 06:14:15.190087 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.190098 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:14:15.190109 | orchestrator |
2026-03-28 06:14:15.190120 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-28 06:14:15.190131 | orchestrator | Saturday 28 March 2026 06:14:10 +0000 (0:00:01.304) 1:00:16.461 ********
2026-03-28 06:14:15.190144 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-28 06:14:15.190171 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-28 06:15:04.358246 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-28 06:15:04.358329 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-28 06:15:04.358335 | orchestrator |
2026-03-28 06:15:04.358340 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-28 06:15:04.358345 | orchestrator | Saturday 28 March 2026 06:14:15 +0000 (0:00:05.147) 1:00:21.609 ********
2026-03-28 06:15:04.358350 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:15:04.358355 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:15:04.358359 | orchestrator |
2026-03-28 06:15:04.358363 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-28 06:15:04.358367 | orchestrator | Saturday 28 March 2026 06:14:16 +0000
(0:00:01.283) 1:00:22.893 ******** 2026-03-28 06:15:04.358371 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358378 | orchestrator | 2026-03-28 06:15:04.358383 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:15:04.358388 | orchestrator | Saturday 28 March 2026 06:14:17 +0000 (0:00:01.331) 1:00:24.224 ******** 2026-03-28 06:15:04.358392 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358396 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358400 | orchestrator | 2026-03-28 06:15:04.358403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:15:04.358407 | orchestrator | Saturday 28 March 2026 06:14:19 +0000 (0:00:01.442) 1:00:25.666 ******** 2026-03-28 06:15:04.358411 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358415 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358419 | orchestrator | 2026-03-28 06:15:04.358423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:15:04.358426 | orchestrator | Saturday 28 March 2026 06:14:20 +0000 (0:00:01.328) 1:00:26.995 ******** 2026-03-28 06:15:04.358430 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358434 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358438 | orchestrator | 2026-03-28 06:15:04.358442 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:15:04.358446 | orchestrator | Saturday 28 March 2026 06:14:21 +0000 (0:00:01.248) 1:00:28.243 ******** 2026-03-28 06:15:04.358450 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358455 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358458 | orchestrator | 2026-03-28 
06:15:04.358463 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:15:04.358466 | orchestrator | Saturday 28 March 2026 06:14:23 +0000 (0:00:01.472) 1:00:29.716 ******** 2026-03-28 06:15:04.358470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:15:04.358474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:15:04.358478 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:15:04.358493 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358497 | orchestrator | 2026-03-28 06:15:04.358501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:15:04.358504 | orchestrator | Saturday 28 March 2026 06:14:24 +0000 (0:00:01.505) 1:00:31.222 ******** 2026-03-28 06:15:04.358508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:15:04.358525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:15:04.358529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:15:04.358533 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358537 | orchestrator | 2026-03-28 06:15:04.358540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:15:04.358544 | orchestrator | Saturday 28 March 2026 06:14:26 +0000 (0:00:01.534) 1:00:32.757 ******** 2026-03-28 06:15:04.358548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:15:04.358552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:15:04.358556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:15:04.358559 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358563 | orchestrator | 2026-03-28 06:15:04.358567 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-28 06:15:04.358571 | orchestrator | Saturday 28 March 2026 06:14:28 +0000 (0:00:01.846) 1:00:34.603 ******** 2026-03-28 06:15:04.358574 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358578 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358582 | orchestrator | 2026-03-28 06:15:04.358586 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:15:04.358590 | orchestrator | Saturday 28 March 2026 06:14:29 +0000 (0:00:01.397) 1:00:36.000 ******** 2026-03-28 06:15:04.358593 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 06:15:04.358597 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 06:15:04.358601 | orchestrator | 2026-03-28 06:15:04.358605 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 06:15:04.358609 | orchestrator | Saturday 28 March 2026 06:14:31 +0000 (0:00:01.488) 1:00:37.489 ******** 2026-03-28 06:15:04.358612 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358616 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358620 | orchestrator | 2026-03-28 06:15:04.358632 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-28 06:15:04.358637 | orchestrator | Saturday 28 March 2026 06:14:33 +0000 (0:00:01.949) 1:00:39.438 ******** 2026-03-28 06:15:04.358640 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358644 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358648 | orchestrator | 2026-03-28 06:15:04.358652 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-28 06:15:04.358656 | orchestrator | Saturday 28 March 2026 06:14:34 +0000 (0:00:01.299) 1:00:40.738 ******** 2026-03-28 06:15:04.358659 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-3 2026-03-28 06:15:04.358664 | orchestrator | 2026-03-28 06:15:04.358668 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-28 06:15:04.358672 | orchestrator | Saturday 28 March 2026 06:14:35 +0000 (0:00:01.458) 1:00:42.197 ******** 2026-03-28 06:15:04.358676 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 06:15:04.358679 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 06:15:04.358683 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-28 06:15:04.358687 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-28 06:15:04.358691 | orchestrator | 2026-03-28 06:15:04.358694 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-28 06:15:04.358698 | orchestrator | Saturday 28 March 2026 06:14:37 +0000 (0:00:02.014) 1:00:44.212 ******** 2026-03-28 06:15:04.358702 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:15:04.358706 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 06:15:04.358709 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:15:04.358713 | orchestrator | 2026-03-28 06:15:04.358717 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:15:04.358724 | orchestrator | Saturday 28 March 2026 06:14:40 +0000 (0:00:03.142) 1:00:47.355 ******** 2026-03-28 06:15:04.358728 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-28 06:15:04.358732 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 06:15:04.358736 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358740 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-28 06:15:04.358744 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-03-28 06:15:04.358747 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358751 | orchestrator | 2026-03-28 06:15:04.358755 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-28 06:15:04.358759 | orchestrator | Saturday 28 March 2026 06:14:42 +0000 (0:00:02.063) 1:00:49.418 ******** 2026-03-28 06:15:04.358763 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358766 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358770 | orchestrator | 2026-03-28 06:15:04.358774 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-28 06:15:04.358778 | orchestrator | Saturday 28 March 2026 06:14:44 +0000 (0:00:01.622) 1:00:51.041 ******** 2026-03-28 06:15:04.358781 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.358785 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:04.358789 | orchestrator | 2026-03-28 06:15:04.358793 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-28 06:15:04.358797 | orchestrator | Saturday 28 March 2026 06:14:45 +0000 (0:00:01.263) 1:00:52.304 ******** 2026-03-28 06:15:04.358801 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-3 2026-03-28 06:15:04.358805 | orchestrator | 2026-03-28 06:15:04.358808 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-28 06:15:04.358815 | orchestrator | Saturday 28 March 2026 06:14:47 +0000 (0:00:01.530) 1:00:53.835 ******** 2026-03-28 06:15:04.358820 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3 2026-03-28 06:15:04.358868 | orchestrator | 2026-03-28 06:15:04.358875 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-28 06:15:04.358881 | orchestrator | Saturday 28 March 2026 
06:14:48 +0000 (0:00:01.232) 1:00:55.067 ******** 2026-03-28 06:15:04.358887 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358891 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358896 | orchestrator | 2026-03-28 06:15:04.358901 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-28 06:15:04.358905 | orchestrator | Saturday 28 March 2026 06:14:50 +0000 (0:00:02.171) 1:00:57.239 ******** 2026-03-28 06:15:04.358909 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358914 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358918 | orchestrator | 2026-03-28 06:15:04.358923 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-28 06:15:04.358927 | orchestrator | Saturday 28 March 2026 06:14:52 +0000 (0:00:02.050) 1:00:59.290 ******** 2026-03-28 06:15:04.358932 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358936 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358941 | orchestrator | 2026-03-28 06:15:04.358945 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-28 06:15:04.358949 | orchestrator | Saturday 28 March 2026 06:14:55 +0000 (0:00:02.475) 1:01:01.766 ******** 2026-03-28 06:15:04.358954 | orchestrator | changed: [testbed-node-4] 2026-03-28 06:15:04.358958 | orchestrator | changed: [testbed-node-3] 2026-03-28 06:15:04.358963 | orchestrator | 2026-03-28 06:15:04.358967 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-28 06:15:04.358972 | orchestrator | Saturday 28 March 2026 06:14:59 +0000 (0:00:03.781) 1:01:05.547 ******** 2026-03-28 06:15:04.358976 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:15:04.358980 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:04.358985 | orchestrator | 2026-03-28 06:15:04.358989 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-28 06:15:04.358998 | orchestrator | Saturday 28 March 2026 06:15:00 +0000 (0:00:01.804) 1:01:07.352 ******** 2026-03-28 06:15:04.359002 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:15:04.359010 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:15:27.289253 | orchestrator | 2026-03-28 06:15:27.289373 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-28 06:15:27.289392 | orchestrator | 2026-03-28 06:15:27.289404 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 06:15:27.289416 | orchestrator | Saturday 28 March 2026 06:15:04 +0000 (0:00:03.424) 1:01:10.776 ******** 2026-03-28 06:15:27.289427 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-28 06:15:27.289438 | orchestrator | 2026-03-28 06:15:27.289450 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 06:15:27.289461 | orchestrator | Saturday 28 March 2026 06:15:05 +0000 (0:00:01.162) 1:01:11.938 ******** 2026-03-28 06:15:27.289472 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289484 | orchestrator | 2026-03-28 06:15:27.289496 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 06:15:27.289507 | orchestrator | Saturday 28 March 2026 06:15:06 +0000 (0:00:01.474) 1:01:13.413 ******** 2026-03-28 06:15:27.289518 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289529 | orchestrator | 2026-03-28 06:15:27.289540 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:15:27.289551 | orchestrator | Saturday 28 March 2026 06:15:08 +0000 (0:00:01.107) 1:01:14.521 ******** 2026-03-28 06:15:27.289562 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289573 | 
orchestrator | 2026-03-28 06:15:27.289584 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:15:27.289595 | orchestrator | Saturday 28 March 2026 06:15:09 +0000 (0:00:01.450) 1:01:15.971 ******** 2026-03-28 06:15:27.289606 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289618 | orchestrator | 2026-03-28 06:15:27.289629 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 06:15:27.289640 | orchestrator | Saturday 28 March 2026 06:15:10 +0000 (0:00:01.130) 1:01:17.101 ******** 2026-03-28 06:15:27.289651 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289662 | orchestrator | 2026-03-28 06:15:27.289673 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 06:15:27.289684 | orchestrator | Saturday 28 March 2026 06:15:11 +0000 (0:00:01.112) 1:01:18.214 ******** 2026-03-28 06:15:27.289695 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289706 | orchestrator | 2026-03-28 06:15:27.289717 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 06:15:27.289729 | orchestrator | Saturday 28 March 2026 06:15:12 +0000 (0:00:01.121) 1:01:19.335 ******** 2026-03-28 06:15:27.289740 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:27.289752 | orchestrator | 2026-03-28 06:15:27.289763 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 06:15:27.289774 | orchestrator | Saturday 28 March 2026 06:15:14 +0000 (0:00:01.168) 1:01:20.504 ******** 2026-03-28 06:15:27.289786 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289799 | orchestrator | 2026-03-28 06:15:27.289812 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 06:15:27.289869 | orchestrator | Saturday 28 March 2026 06:15:15 +0000 
(0:00:01.146) 1:01:21.650 ******** 2026-03-28 06:15:27.289883 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:15:27.289896 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:15:27.289908 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:15:27.289920 | orchestrator | 2026-03-28 06:15:27.289933 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-28 06:15:27.289945 | orchestrator | Saturday 28 March 2026 06:15:17 +0000 (0:00:01.841) 1:01:23.491 ******** 2026-03-28 06:15:27.289980 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:27.289992 | orchestrator | 2026-03-28 06:15:27.290075 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 06:15:27.290090 | orchestrator | Saturday 28 March 2026 06:15:18 +0000 (0:00:01.236) 1:01:24.728 ******** 2026-03-28 06:15:27.290101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:15:27.290112 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:15:27.290122 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:15:27.290134 | orchestrator | 2026-03-28 06:15:27.290145 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 06:15:27.290155 | orchestrator | Saturday 28 March 2026 06:15:21 +0000 (0:00:02.961) 1:01:27.689 ******** 2026-03-28 06:15:27.290166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 06:15:27.290178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 06:15:27.290189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 
06:15:27.290200 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:27.290211 | orchestrator | 2026-03-28 06:15:27.290222 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 06:15:27.290233 | orchestrator | Saturday 28 March 2026 06:15:22 +0000 (0:00:01.426) 1:01:29.116 ******** 2026-03-28 06:15:27.290246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 06:15:27.290260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 06:15:27.290292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 06:15:27.290304 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:27.290315 | orchestrator | 2026-03-28 06:15:27.290326 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 06:15:27.290337 | orchestrator | Saturday 28 March 2026 06:15:24 +0000 (0:00:02.114) 1:01:31.230 ******** 2026-03-28 06:15:27.290350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 
06:15:27.290363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:27.290375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:27.290386 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:27.290407 | orchestrator | 2026-03-28 06:15:27.290418 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 06:15:27.290429 | orchestrator | Saturday 28 March 2026 06:15:25 +0000 (0:00:01.159) 1:01:32.389 ******** 2026-03-28 06:15:27.290448 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 06:15:18.827233', 'end': '2026-03-28 06:15:18.863781', 'delta': '0:00:00.036548', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 06:15:27.290463 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:15:19.347762', 'end': '2026-03-28 06:15:19.390710', 'delta': '0:00:00.042948', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 06:15:27.290475 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:15:19.914495', 'end': '2026-03-28 06:15:19.962009', 'delta': '0:00:00.047514', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 06:15:27.290486 | orchestrator | 2026-03-28 06:15:27.290505 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 06:15:45.489110 | orchestrator | Saturday 28 March 2026 06:15:27 +0000 (0:00:01.315) 1:01:33.705 ******** 2026-03-28 06:15:45.489227 | orchestrator | ok: [testbed-node-3] 2026-03-28 
06:15:45.489245 | orchestrator | 2026-03-28 06:15:45.489257 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 06:15:45.489269 | orchestrator | Saturday 28 March 2026 06:15:28 +0000 (0:00:01.278) 1:01:34.983 ******** 2026-03-28 06:15:45.489281 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489293 | orchestrator | 2026-03-28 06:15:45.489304 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 06:15:45.489315 | orchestrator | Saturday 28 March 2026 06:15:30 +0000 (0:00:01.669) 1:01:36.653 ******** 2026-03-28 06:15:45.489326 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:45.489337 | orchestrator | 2026-03-28 06:15:45.489349 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 06:15:45.489360 | orchestrator | Saturday 28 March 2026 06:15:31 +0000 (0:00:01.257) 1:01:37.910 ******** 2026-03-28 06:15:45.489371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:15:45.489382 | orchestrator | 2026-03-28 06:15:45.489393 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:15:45.489404 | orchestrator | Saturday 28 March 2026 06:15:33 +0000 (0:00:01.999) 1:01:39.910 ******** 2026-03-28 06:15:45.489440 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:45.489452 | orchestrator | 2026-03-28 06:15:45.489463 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 06:15:45.489474 | orchestrator | Saturday 28 March 2026 06:15:34 +0000 (0:00:01.165) 1:01:41.075 ******** 2026-03-28 06:15:45.489485 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489496 | orchestrator | 2026-03-28 06:15:45.489508 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 06:15:45.489519 | orchestrator 
| Saturday 28 March 2026 06:15:35 +0000 (0:00:01.225) 1:01:42.301 ******** 2026-03-28 06:15:45.489529 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489540 | orchestrator | 2026-03-28 06:15:45.489551 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:15:45.489562 | orchestrator | Saturday 28 March 2026 06:15:37 +0000 (0:00:01.247) 1:01:43.548 ******** 2026-03-28 06:15:45.489573 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489583 | orchestrator | 2026-03-28 06:15:45.489594 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 06:15:45.489608 | orchestrator | Saturday 28 March 2026 06:15:38 +0000 (0:00:01.139) 1:01:44.687 ******** 2026-03-28 06:15:45.489626 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489642 | orchestrator | 2026-03-28 06:15:45.489655 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 06:15:45.489668 | orchestrator | Saturday 28 March 2026 06:15:39 +0000 (0:00:01.139) 1:01:45.826 ******** 2026-03-28 06:15:45.489681 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:45.489694 | orchestrator | 2026-03-28 06:15:45.489707 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 06:15:45.489720 | orchestrator | Saturday 28 March 2026 06:15:40 +0000 (0:00:01.181) 1:01:47.008 ******** 2026-03-28 06:15:45.489733 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489745 | orchestrator | 2026-03-28 06:15:45.489758 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 06:15:45.489771 | orchestrator | Saturday 28 March 2026 06:15:41 +0000 (0:00:01.133) 1:01:48.142 ******** 2026-03-28 06:15:45.489783 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:45.489796 | orchestrator | 2026-03-28 06:15:45.489809 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 06:15:45.489871 | orchestrator | Saturday 28 March 2026 06:15:42 +0000 (0:00:01.183) 1:01:49.326 ******** 2026-03-28 06:15:45.489894 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:45.489913 | orchestrator | 2026-03-28 06:15:45.489934 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 06:15:45.489953 | orchestrator | Saturday 28 March 2026 06:15:44 +0000 (0:00:01.148) 1:01:50.474 ******** 2026-03-28 06:15:45.489969 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:15:45.489981 | orchestrator | 2026-03-28 06:15:45.489995 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 06:15:45.490007 | orchestrator | Saturday 28 March 2026 06:15:45 +0000 (0:00:01.177) 1:01:51.652 ******** 2026-03-28 06:15:45.490077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:45.490097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}})  2026-03-28 06:15:45.490143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 06:15:45.490157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}})  2026-03-28 06:15:45.490170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:45.490182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:45.490201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 06:15:45.490214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:45.490226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:15:45.490252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:47.385379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}})  2026-03-28 06:15:47.385478 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}})  2026-03-28 06:15:47.385494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:47.385527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 06:15:47.385580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:47.385593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:15:47.385604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:15:47.385620 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:15:47.385639 | orchestrator | 2026-03-28 06:15:47.385657 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 06:15:47.385676 | orchestrator | Saturday 28 March 2026 06:15:46 +0000 (0:00:01.455) 1:01:53.108 ******** 2026-03-28 06:15:47.385688 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:47.385706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb', 'dm-uuid-LVM-Y0MPw6eQ99Z3dV2pgIWJl2qW0TNHtp82LwCUZLDKZAy8wkYZqpXvtrp18Yz7gDl7'], 'uuids': ['6592ff2e-d639-4ef0-97cb-82fd6b229dbc'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:47.385726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2', 'scsi-SQEMU_QEMU_HARDDISK_ca153e9b-7080-4ee3-8b85-a6ac7f502dd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ca153e9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:47.385745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-CPsN5y-Qc2O-KgJw-o91L-C21j-cnCu-HRp1Od', 'scsi-0QEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e', 'scsi-SQEMU_QEMU_HARDDISK_56fe6360-407e-41e5-aa3f-c02b23be8c9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG', 'dm-uuid-CRYPT-LUKS2-8305ad77be294b18b3d0e842513dca1b-GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659710 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e94d822c--120c--5920--885f--96546946f9a0-osd--block--e94d822c--120c--5920--885f--96546946f9a0', 'dm-uuid-LVM-SuK8J9HN5FRV1XXtp8J1DDHtwGBaQSgJGF3jH1XCnn0zR5RKAUmdAoCAutn0e1qG'], 'uuids': ['8305ad77-be29-4b18-b3d0-e842513dca1b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56fe6360', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GF3jH1-XCnn-0zR5-RKAU-mdAo-CAut-n0e1qG']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jmqra6-7GzY-EUqO-rL2j-tyrb-dfmO-nkVfHH', 'scsi-0QEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94', 'scsi-SQEMU_QEMU_HARDDISK_ff7faa01-13ed-42f1-881f-ea73c666aa94'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff7faa01', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--97a2d1a8--b450--5e97--9b32--db4bafa583cb-osd--block--97a2d1a8--b450--5e97--9b32--db4bafa583cb']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:15:48.659776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0af52fc6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1', 'scsi-SQEMU_QEMU_HARDDISK_0af52fc6-9f61-4e53-b423-bede1fc620c7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:16:17.118756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:16:17.118937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:16:17.118982 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7', 'dm-uuid-CRYPT-LUKS2-6592ff2ed6394ef097cb82fd6b229dbc-LwCUZL-DKZA-y8wk-YZqp-Xvtr-p18Y-z7gDl7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:16:17.118996 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:16:17.119009 | orchestrator | 2026-03-28 06:16:17.119021 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-28 06:16:17.119033 | orchestrator | Saturday 28 March 2026 06:15:48 +0000 (0:00:01.978) 1:01:55.087 ******** 2026-03-28 06:16:17.119044 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:16:17.119056 | orchestrator | 2026-03-28 06:16:17.119067 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-28 06:16:17.119078 | orchestrator | Saturday 28 March 2026 06:15:50 +0000 (0:00:01.510) 1:01:56.597 ******** 2026-03-28 06:16:17.119089 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:16:17.119100 | orchestrator | 2026-03-28 06:16:17.119110 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:16:17.119121 | orchestrator | Saturday 28 March 2026 06:15:51 +0000 (0:00:01.273) 1:01:57.871 ******** 2026-03-28 06:16:17.119133 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:16:17.119144 | orchestrator | 2026-03-28 06:16:17.119154 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:16:17.119165 | orchestrator | Saturday 28 March 2026 06:15:52 +0000 (0:00:01.479) 1:01:59.350 ******** 2026-03-28 06:16:17.119176 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:16:17.119187 | orchestrator | 2026-03-28 06:16:17.119197 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-28 06:16:17.119208 | orchestrator | Saturday 28 March 2026 06:15:54 +0000 (0:00:01.151) 1:02:00.502 ******** 2026-03-28 06:16:17.119219 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
06:16:17.119230 | orchestrator | 2026-03-28 06:16:17.119240 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-28 06:16:17.119251 | orchestrator | Saturday 28 March 2026 06:15:55 +0000 (0:00:01.297) 1:02:01.799 ******** 2026-03-28 06:16:17.119262 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:16:17.119272 | orchestrator | 2026-03-28 06:16:17.119283 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-28 06:16:17.119294 | orchestrator | Saturday 28 March 2026 06:15:56 +0000 (0:00:01.140) 1:02:02.940 ******** 2026-03-28 06:16:17.119305 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-28 06:16:17.119316 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-28 06:16:17.119327 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-28 06:16:17.119337 | orchestrator | 2026-03-28 06:16:17.119348 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 06:16:17.119359 | orchestrator | Saturday 28 March 2026 06:15:58 +0000 (0:00:01.764) 1:02:04.704 ******** 2026-03-28 06:16:17.119370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 06:16:17.119381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 06:16:17.119392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 06:16:17.119402 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:16:17.119413 | orchestrator | 2026-03-28 06:16:17.119424 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 06:16:17.119434 | orchestrator | Saturday 28 March 2026 06:15:59 +0000 (0:00:01.217) 1:02:05.921 ******** 2026-03-28 06:16:17.119472 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-28 06:16:17.119484 | 
orchestrator |
2026-03-28 06:16:17.119496 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 06:16:17.119508 | orchestrator | Saturday 28 March 2026 06:16:00 +0000 (0:00:01.124) 1:02:07.046 ********
2026-03-28 06:16:17.119519 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119529 | orchestrator |
2026-03-28 06:16:17.119540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 06:16:17.119551 | orchestrator | Saturday 28 March 2026 06:16:01 +0000 (0:00:01.241) 1:02:08.288 ********
2026-03-28 06:16:17.119562 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119573 | orchestrator |
2026-03-28 06:16:17.119583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 06:16:17.119594 | orchestrator | Saturday 28 March 2026 06:16:02 +0000 (0:00:01.122) 1:02:09.411 ********
2026-03-28 06:16:17.119605 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119616 | orchestrator |
2026-03-28 06:16:17.119626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 06:16:17.119643 | orchestrator | Saturday 28 March 2026 06:16:04 +0000 (0:00:01.199) 1:02:10.610 ********
2026-03-28 06:16:17.119654 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:16:17.119665 | orchestrator |
2026-03-28 06:16:17.119676 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 06:16:17.119703 | orchestrator | Saturday 28 March 2026 06:16:05 +0000 (0:00:01.226) 1:02:11.837 ********
2026-03-28 06:16:17.119725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 06:16:17.119736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 06:16:17.119747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 06:16:17.119758 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119769 | orchestrator |
2026-03-28 06:16:17.119779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 06:16:17.119790 | orchestrator | Saturday 28 March 2026 06:16:06 +0000 (0:00:01.466) 1:02:13.303 ********
2026-03-28 06:16:17.119801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 06:16:17.119832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 06:16:17.119844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 06:16:17.119855 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119866 | orchestrator |
2026-03-28 06:16:17.119877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 06:16:17.119888 | orchestrator | Saturday 28 March 2026 06:16:08 +0000 (0:00:01.453) 1:02:14.757 ********
2026-03-28 06:16:17.119899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 06:16:17.119909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 06:16:17.119920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 06:16:17.119931 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:16:17.119942 | orchestrator |
2026-03-28 06:16:17.119953 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 06:16:17.119963 | orchestrator | Saturday 28 March 2026 06:16:09 +0000 (0:00:01.383) 1:02:16.141 ********
2026-03-28 06:16:17.119974 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:16:17.119985 | orchestrator |
2026-03-28 06:16:17.119996 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 06:16:17.120007 | orchestrator | Saturday 28 March 2026 06:16:10 +0000 (0:00:01.117) 1:02:17.258 ********
2026-03-28 06:16:17.120018 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 06:16:17.120029 | orchestrator |
2026-03-28 06:16:17.120043 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-28 06:16:17.120073 | orchestrator | Saturday 28 March 2026 06:16:12 +0000 (0:00:01.421) 1:02:18.680 ********
2026-03-28 06:16:17.120093 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:16:17.120111 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:16:17.120130 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:16:17.120148 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 06:16:17.120166 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 06:16:17.120184 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 06:16:17.120203 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 06:16:17.120221 | orchestrator |
2026-03-28 06:16:17.120240 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-28 06:16:17.120259 | orchestrator | Saturday 28 March 2026 06:16:14 +0000 (0:00:02.186) 1:02:20.866 ********
2026-03-28 06:16:17.120279 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:16:17.120298 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:16:17.120318 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:16:17.120336 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 06:16:17.120348 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 06:16:17.120359 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 06:16:17.120370 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 06:16:17.120381 | orchestrator |
2026-03-28 06:16:17.120403 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-28 06:17:11.465596 | orchestrator | Saturday 28 March 2026 06:16:17 +0000 (0:00:02.672) 1:02:23.539 ********
2026-03-28 06:17:11.465713 | orchestrator | changed: [testbed-node-3]
2026-03-28 06:17:11.465729 | orchestrator |
2026-03-28 06:17:11.465741 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-28 06:17:11.465753 | orchestrator | Saturday 28 March 2026 06:16:19 +0000 (0:00:02.314) 1:02:25.853 ********
2026-03-28 06:17:11.465765 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:17:11.465777 | orchestrator |
2026-03-28 06:17:11.465788 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-28 06:17:11.465800 | orchestrator | Saturday 28 March 2026 06:16:22 +0000 (0:00:03.207) 1:02:29.061 ********
2026-03-28 06:17:11.465873 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:17:11.465885 | orchestrator |
2026-03-28 06:17:11.465896 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 06:17:11.465924 | orchestrator | Saturday 28 March 2026 06:16:25 +0000 (0:00:02.408) 1:02:31.469 ********
2026-03-28 06:17:11.465936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-03-28 06:17:11.465947 | orchestrator |
2026-03-28 06:17:11.465958 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 06:17:11.465969 | orchestrator | Saturday 28 March 2026 06:16:26 +0000 (0:00:01.291) 1:02:32.761 ********
2026-03-28 06:17:11.465980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-03-28 06:17:11.465991 | orchestrator |
2026-03-28 06:17:11.466004 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 06:17:11.466073 | orchestrator | Saturday 28 March 2026 06:16:27 +0000 (0:00:01.199) 1:02:33.960 ********
2026-03-28 06:17:11.466110 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466122 | orchestrator |
2026-03-28 06:17:11.466136 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 06:17:11.466148 | orchestrator | Saturday 28 March 2026 06:16:28 +0000 (0:00:01.177) 1:02:35.138 ********
2026-03-28 06:17:11.466161 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466174 | orchestrator |
2026-03-28 06:17:11.466188 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 06:17:11.466200 | orchestrator | Saturday 28 March 2026 06:16:30 +0000 (0:00:01.592) 1:02:36.731 ********
2026-03-28 06:17:11.466212 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466225 | orchestrator |
2026-03-28 06:17:11.466238 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 06:17:11.466250 | orchestrator | Saturday 28 March 2026 06:16:31 +0000 (0:00:01.640) 1:02:38.372 ********
2026-03-28 06:17:11.466262 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466275 | orchestrator |
2026-03-28 06:17:11.466287 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 06:17:11.466300 | orchestrator | Saturday 28 March 2026 06:16:33 +0000 (0:00:01.546) 1:02:39.918 ********
2026-03-28 06:17:11.466312 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466325 | orchestrator |
2026-03-28 06:17:11.466337 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 06:17:11.466350 | orchestrator | Saturday 28 March 2026 06:16:34 +0000 (0:00:01.132) 1:02:41.050 ********
2026-03-28 06:17:11.466363 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466375 | orchestrator |
2026-03-28 06:17:11.466387 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 06:17:11.466400 | orchestrator | Saturday 28 March 2026 06:16:35 +0000 (0:00:01.157) 1:02:42.208 ********
2026-03-28 06:17:11.466413 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466425 | orchestrator |
2026-03-28 06:17:11.466438 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 06:17:11.466450 | orchestrator | Saturday 28 March 2026 06:16:36 +0000 (0:00:01.152) 1:02:43.361 ********
2026-03-28 06:17:11.466463 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466475 | orchestrator |
2026-03-28 06:17:11.466488 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 06:17:11.466501 | orchestrator | Saturday 28 March 2026 06:16:38 +0000 (0:00:01.649) 1:02:45.011 ********
2026-03-28 06:17:11.466514 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466526 | orchestrator |
2026-03-28 06:17:11.466537 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 06:17:11.466548 | orchestrator | Saturday 28 March 2026 06:16:40 +0000 (0:00:01.535) 1:02:46.546 ********
2026-03-28 06:17:11.466559 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466570 | orchestrator |
2026-03-28 06:17:11.466581 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 06:17:11.466592 | orchestrator | Saturday 28 March 2026 06:16:41 +0000 (0:00:01.175) 1:02:47.722 ********
2026-03-28 06:17:11.466603 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466614 | orchestrator |
2026-03-28 06:17:11.466625 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 06:17:11.466636 | orchestrator | Saturday 28 March 2026 06:16:42 +0000 (0:00:01.202) 1:02:48.924 ********
2026-03-28 06:17:11.466647 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466658 | orchestrator |
2026-03-28 06:17:11.466669 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 06:17:11.466680 | orchestrator | Saturday 28 March 2026 06:16:43 +0000 (0:00:01.201) 1:02:50.126 ********
2026-03-28 06:17:11.466691 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466702 | orchestrator |
2026-03-28 06:17:11.466713 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 06:17:11.466725 | orchestrator | Saturday 28 March 2026 06:16:44 +0000 (0:00:01.172) 1:02:51.298 ********
2026-03-28 06:17:11.466744 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466755 | orchestrator |
2026-03-28 06:17:11.466783 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 06:17:11.466795 | orchestrator | Saturday 28 March 2026 06:16:46 +0000 (0:00:01.139) 1:02:52.438 ********
2026-03-28 06:17:11.466824 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466836 | orchestrator |
2026-03-28 06:17:11.466847 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 06:17:11.466858 | orchestrator | Saturday 28 March 2026 06:16:47 +0000 (0:00:01.172) 1:02:53.610 ********
2026-03-28 06:17:11.466882 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466894 | orchestrator |
2026-03-28 06:17:11.466915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 06:17:11.466926 | orchestrator | Saturday 28 March 2026 06:16:48 +0000 (0:00:01.103) 1:02:54.714 ********
2026-03-28 06:17:11.466937 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.466948 | orchestrator |
2026-03-28 06:17:11.466959 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 06:17:11.466970 | orchestrator | Saturday 28 March 2026 06:16:49 +0000 (0:00:01.193) 1:02:55.908 ********
2026-03-28 06:17:11.466981 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.466992 | orchestrator |
2026-03-28 06:17:11.467003 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 06:17:11.467020 | orchestrator | Saturday 28 March 2026 06:16:50 +0000 (0:00:01.201) 1:02:57.109 ********
2026-03-28 06:17:11.467032 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.467043 | orchestrator |
2026-03-28 06:17:11.467054 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 06:17:11.467065 | orchestrator | Saturday 28 March 2026 06:16:51 +0000 (0:00:01.210) 1:02:58.319 ********
2026-03-28 06:17:11.467076 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467086 | orchestrator |
2026-03-28 06:17:11.467097 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 06:17:11.467108 | orchestrator | Saturday 28 March 2026 06:16:53 +0000 (0:00:01.149) 1:02:59.469 ********
2026-03-28 06:17:11.467119 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467130 | orchestrator |
2026-03-28 06:17:11.467141 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 06:17:11.467152 | orchestrator | Saturday 28 March 2026 06:16:54 +0000 (0:00:01.150) 1:03:00.620 ********
2026-03-28 06:17:11.467162 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467173 | orchestrator |
2026-03-28 06:17:11.467184 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 06:17:11.467195 | orchestrator | Saturday 28 March 2026 06:16:55 +0000 (0:00:01.217) 1:03:01.838 ********
2026-03-28 06:17:11.467206 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467217 | orchestrator |
2026-03-28 06:17:11.467228 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 06:17:11.467239 | orchestrator | Saturday 28 March 2026 06:16:56 +0000 (0:00:01.124) 1:03:02.962 ********
2026-03-28 06:17:11.467249 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467260 | orchestrator |
2026-03-28 06:17:11.467271 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 06:17:11.467282 | orchestrator | Saturday 28 March 2026 06:16:57 +0000 (0:00:01.162) 1:03:04.124 ********
2026-03-28 06:17:11.467293 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467304 | orchestrator |
2026-03-28 06:17:11.467324 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 06:17:11.467343 | orchestrator | Saturday 28 March 2026 06:16:58 +0000 (0:00:01.192) 1:03:05.317 ********
2026-03-28 06:17:11.467362 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467380 | orchestrator |
2026-03-28 06:17:11.467400 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 06:17:11.467421 | orchestrator | Saturday 28 March 2026 06:17:00 +0000 (0:00:01.150) 1:03:06.467 ********
2026-03-28 06:17:11.467445 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467457 | orchestrator |
2026-03-28 06:17:11.467468 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 06:17:11.467479 | orchestrator | Saturday 28 March 2026 06:17:01 +0000 (0:00:01.205) 1:03:07.673 ********
2026-03-28 06:17:11.467490 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467501 | orchestrator |
2026-03-28 06:17:11.467758 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 06:17:11.467771 | orchestrator | Saturday 28 March 2026 06:17:02 +0000 (0:00:01.183) 1:03:08.857 ********
2026-03-28 06:17:11.467782 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467793 | orchestrator |
2026-03-28 06:17:11.467804 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 06:17:11.467838 | orchestrator | Saturday 28 March 2026 06:17:03 +0000 (0:00:01.142) 1:03:10.000 ********
2026-03-28 06:17:11.467849 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467860 | orchestrator |
2026-03-28 06:17:11.467871 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 06:17:11.467882 | orchestrator | Saturday 28 March 2026 06:17:04 +0000 (0:00:01.162) 1:03:11.163 ********
2026-03-28 06:17:11.467893 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:11.467904 | orchestrator |
2026-03-28 06:17:11.467915 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 06:17:11.467926 | orchestrator | Saturday 28 March 2026 06:17:05 +0000 (0:00:01.192) 1:03:12.356 ********
2026-03-28 06:17:11.467937 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.467948 | orchestrator |
2026-03-28 06:17:11.467959 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 06:17:11.467970 | orchestrator | Saturday 28 March 2026 06:17:07 +0000 (0:00:01.944) 1:03:14.301 ********
2026-03-28 06:17:11.467981 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:11.467992 | orchestrator |
2026-03-28 06:17:11.468002 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 06:17:11.468013 | orchestrator | Saturday 28 March 2026 06:17:10 +0000 (0:00:02.266) 1:03:16.567 ********
2026-03-28 06:17:11.468024 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-28 06:17:11.468035 | orchestrator |
2026-03-28 06:17:11.468046 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 06:17:11.468069 | orchestrator | Saturday 28 March 2026 06:17:11 +0000 (0:00:01.316) 1:03:17.884 ********
2026-03-28 06:17:58.753967 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754161 | orchestrator |
2026-03-28 06:17:58.754181 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 06:17:58.754194 | orchestrator | Saturday 28 March 2026 06:17:12 +0000 (0:00:01.127) 1:03:19.011 ********
2026-03-28 06:17:58.754206 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754217 | orchestrator |
2026-03-28 06:17:58.754229 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 06:17:58.754240 | orchestrator | Saturday 28 March 2026 06:17:13 +0000 (0:00:01.136) 1:03:20.147 ********
2026-03-28 06:17:58.754251 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 06:17:58.754263 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 06:17:58.754275 | orchestrator |
2026-03-28 06:17:58.754286 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 06:17:58.754297 | orchestrator | Saturday 28 March 2026 06:17:15 +0000 (0:00:01.882) 1:03:22.030 ********
2026-03-28 06:17:58.754323 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:58.754336 | orchestrator |
2026-03-28 06:17:58.754347 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 06:17:58.754358 | orchestrator | Saturday 28 March 2026 06:17:17 +0000 (0:00:01.513) 1:03:23.544 ********
2026-03-28 06:17:58.754369 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754405 | orchestrator |
2026-03-28 06:17:58.754419 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 06:17:58.754431 | orchestrator | Saturday 28 March 2026 06:17:18 +0000 (0:00:01.141) 1:03:24.685 ********
2026-03-28 06:17:58.754445 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754457 | orchestrator |
2026-03-28 06:17:58.754470 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 06:17:58.754483 | orchestrator | Saturday 28 March 2026 06:17:19 +0000 (0:00:01.189) 1:03:25.875 ********
2026-03-28 06:17:58.754496 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754508 | orchestrator |
2026-03-28 06:17:58.754521 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 06:17:58.754534 | orchestrator | Saturday 28 March 2026 06:17:20 +0000 (0:00:01.165) 1:03:27.040 ********
2026-03-28 06:17:58.754547 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-28 06:17:58.754561 | orchestrator |
2026-03-28 06:17:58.754574 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 06:17:58.754587 | orchestrator | Saturday 28 March 2026 06:17:21 +0000 (0:00:01.183) 1:03:28.224 ********
2026-03-28 06:17:58.754599 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:58.754612 | orchestrator |
2026-03-28 06:17:58.754625 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 06:17:58.754637 | orchestrator | Saturday 28 March 2026 06:17:23 +0000 (0:00:01.712) 1:03:29.936 ********
2026-03-28 06:17:58.754649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 06:17:58.754660 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 06:17:58.754670 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 06:17:58.754681 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754692 | orchestrator |
2026-03-28 06:17:58.754703 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 06:17:58.754714 | orchestrator | Saturday 28 March 2026 06:17:24 +0000 (0:00:01.265) 1:03:31.202 ********
2026-03-28 06:17:58.754725 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754735 | orchestrator |
2026-03-28 06:17:58.754746 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 06:17:58.754757 | orchestrator | Saturday 28 March 2026 06:17:25 +0000 (0:00:01.166) 1:03:32.368 ********
2026-03-28 06:17:58.754767 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754778 | orchestrator |
2026-03-28 06:17:58.754789 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 06:17:58.754823 | orchestrator | Saturday 28 March 2026 06:17:27 +0000 (0:00:01.283) 1:03:33.652 ********
2026-03-28 06:17:58.754835 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754846 | orchestrator |
2026-03-28 06:17:58.754856 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 06:17:58.754867 | orchestrator | Saturday 28 March 2026 06:17:28 +0000 (0:00:01.129) 1:03:34.782 ********
2026-03-28 06:17:58.754878 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754889 | orchestrator |
2026-03-28 06:17:58.754900 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 06:17:58.754911 | orchestrator | Saturday 28 March 2026 06:17:29 +0000 (0:00:01.227) 1:03:36.009 ********
2026-03-28 06:17:58.754922 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.754932 | orchestrator |
2026-03-28 06:17:58.754943 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 06:17:58.754954 | orchestrator | Saturday 28 March 2026 06:17:30 +0000 (0:00:01.273) 1:03:37.283 ********
2026-03-28 06:17:58.754965 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:58.754976 | orchestrator |
2026-03-28 06:17:58.754987 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 06:17:58.754997 | orchestrator | Saturday 28 March 2026 06:17:33 +0000 (0:00:02.489) 1:03:39.773 ********
2026-03-28 06:17:58.755016 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:58.755027 | orchestrator |
2026-03-28 06:17:58.755038 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 06:17:58.755049 | orchestrator | Saturday 28 March 2026 06:17:34 +0000 (0:00:01.195) 1:03:40.968 ********
2026-03-28 06:17:58.755060 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-28 06:17:58.755071 | orchestrator |
2026-03-28 06:17:58.755082 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 06:17:58.755112 | orchestrator | Saturday 28 March 2026 06:17:35 +0000 (0:00:01.188) 1:03:42.157 ********
2026-03-28 06:17:58.755124 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755135 | orchestrator |
2026-03-28 06:17:58.755146 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 06:17:58.755157 | orchestrator | Saturday 28 March 2026 06:17:36 +0000 (0:00:01.143) 1:03:43.300 ********
2026-03-28 06:17:58.755168 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755178 | orchestrator |
2026-03-28 06:17:58.755190 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 06:17:58.755201 | orchestrator | Saturday 28 March 2026 06:17:38 +0000 (0:00:01.167) 1:03:44.467 ********
2026-03-28 06:17:58.755211 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755222 | orchestrator |
2026-03-28 06:17:58.755233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 06:17:58.755244 | orchestrator | Saturday 28 March 2026 06:17:39 +0000 (0:00:01.136) 1:03:45.604 ********
2026-03-28 06:17:58.755255 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755266 | orchestrator |
2026-03-28 06:17:58.755277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 06:17:58.755294 | orchestrator | Saturday 28 March 2026 06:17:40 +0000 (0:00:01.174) 1:03:46.778 ********
2026-03-28 06:17:58.755305 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755316 | orchestrator |
2026-03-28 06:17:58.755327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 06:17:58.755337 | orchestrator | Saturday 28 March 2026 06:17:41 +0000 (0:00:01.140) 1:03:47.919 ********
2026-03-28 06:17:58.755348 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755359 | orchestrator |
2026-03-28 06:17:58.755370 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 06:17:58.755381 | orchestrator | Saturday 28 March 2026 06:17:42 +0000 (0:00:01.194) 1:03:49.114 ********
2026-03-28 06:17:58.755391 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755402 | orchestrator |
2026-03-28 06:17:58.755413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 06:17:58.755424 | orchestrator | Saturday 28 March 2026 06:17:43 +0000 (0:00:01.172) 1:03:50.287 ********
2026-03-28 06:17:58.755435 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:17:58.755446 | orchestrator |
2026-03-28 06:17:58.755456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 06:17:58.755467 | orchestrator | Saturday 28 March 2026 06:17:45 +0000 (0:00:01.194) 1:03:51.481 ********
2026-03-28 06:17:58.755478 | orchestrator | ok: [testbed-node-3]
2026-03-28 06:17:58.755489 | orchestrator |
2026-03-28 06:17:58.755500 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 06:17:58.755511 | orchestrator | Saturday 28 March 2026 06:17:46 +0000 (0:00:01.207) 1:03:52.689 ********
2026-03-28 06:17:58.755522 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-28 06:17:58.755533 | orchestrator |
2026-03-28 06:17:58.755544 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 06:17:58.755555 | orchestrator | Saturday 28 March 2026 06:17:47 +0000 (0:00:01.225) 1:03:53.914 ********
2026-03-28 06:17:58.755566 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 06:17:58.755578 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 06:17:58.755595 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 06:17:58.755607 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 06:17:58.755617 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 06:17:58.755628 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 06:17:58.755639 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 06:17:58.755650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 06:17:58.755662 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 06:17:58.755673 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 06:17:58.755683 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 06:17:58.755694 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 06:17:58.755705 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 06:17:58.755716 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 06:17:58.755727 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 06:17:58.755738 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 06:17:58.755749 | orchestrator |
2026-03-28 06:17:58.755760 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 06:17:58.755771 | orchestrator | Saturday 28 March 2026 06:17:54 +0000 (0:00:06.630) 1:04:00.545 ********
2026-03-28 06:17:58.755782 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-28 06:17:58.755793 | orchestrator |
2026-03-28 06:17:58.755821 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 06:17:58.755832 | orchestrator | Saturday 28 March 2026 06:17:55 +0000 (0:00:01.144) 1:04:01.690 ********
2026-03-28 06:17:58.755843 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:17:58.755854 | orchestrator |
2026-03-28 06:17:58.755865 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 06:17:58.755876 | orchestrator | Saturday 28 March 2026 06:17:56 +0000 (0:00:01.497) 1:04:03.187 ********
2026-03-28 06:17:58.755887 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 06:17:58.755898 | orchestrator |
2026-03-28 06:17:58.755909 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 06:17:58.755927 | orchestrator | Saturday 28 March 2026 06:17:58 +0000 (0:00:01.980) 1:04:05.168 ********
2026-03-28 06:18:49.547371 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547493 | orchestrator |
2026-03-28 06:18:49.547510 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 06:18:49.547524 | orchestrator | Saturday 28 March 2026 06:17:59 +0000 (0:00:01.214) 1:04:06.383 ********
2026-03-28 06:18:49.547537 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547548 | orchestrator |
2026-03-28 06:18:49.547560 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 06:18:49.547573 | orchestrator | Saturday 28 March 2026 06:18:01 +0000 (0:00:01.253) 1:04:07.637 ********
2026-03-28 06:18:49.547584 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547595 | orchestrator |
2026-03-28 06:18:49.547606 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 06:18:49.547617 | orchestrator | Saturday 28 March 2026 06:18:02 +0000 (0:00:01.137) 1:04:08.775 ********
2026-03-28 06:18:49.547629 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547641 | orchestrator |
2026-03-28 06:18:49.547652 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 06:18:49.547681 | orchestrator | Saturday 28 March 2026 06:18:03 +0000 (0:00:01.141) 1:04:09.916 ********
2026-03-28 06:18:49.547693 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547730 | orchestrator |
2026-03-28 06:18:49.547742 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 06:18:49.547756 | orchestrator | Saturday 28 March 2026 06:18:04 +0000 (0:00:01.191) 1:04:11.107 ********
2026-03-28 06:18:49.547767 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547778 | orchestrator |
2026-03-28 06:18:49.547789 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 06:18:49.547863 | orchestrator | Saturday 28 March 2026 06:18:05 +0000 (0:00:01.194) 1:04:12.301 ********
2026-03-28 06:18:49.547876 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547887 | orchestrator |
2026-03-28 06:18:49.547899 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 06:18:49.547911 | orchestrator | Saturday 28 March 2026 06:18:06 +0000 (0:00:01.131) 1:04:13.433 ********
2026-03-28 06:18:49.547922 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547935 | orchestrator |
2026-03-28 06:18:49.547947 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 06:18:49.547961 | orchestrator | Saturday 28 March 2026 06:18:08 +0000 (0:00:01.110) 1:04:14.544 ********
2026-03-28 06:18:49.547974 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.547986 | orchestrator |
2026-03-28 06:18:49.547998 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-28 06:18:49.548010 | orchestrator | Saturday 28 March 2026 06:18:09 +0000 (0:00:01.123) 1:04:15.667 ********
2026-03-28 06:18:49.548022 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.548033 | orchestrator |
2026-03-28 06:18:49.548046 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-28 06:18:49.548059 | orchestrator | Saturday 28 March 2026 06:18:10 +0000 (0:00:01.121) 1:04:16.789 ********
2026-03-28 06:18:49.548073 | orchestrator | skipping: [testbed-node-3]
2026-03-28 06:18:49.548086 | orchestrator |
2026-03-28 06:18:49.548098 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-28 06:18:49.548110 | orchestrator | Saturday 28 March 2026 06:18:11 +0000 (0:00:01.138) 1:04:17.927 ********
2026-03-28 06:18:49.548123 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-28 06:18:49.548136 | orchestrator |
2026-03-28 06:18:49.548149 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-28 06:18:49.548159 | orchestrator | Saturday 28 March 2026 06:18:15 +0000 (0:00:04.414) 1:04:22.342 ********
2026-03-28 06:18:49.548171 | orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 06:18:49.548185 | orchestrator | 2026-03-28 06:18:49.548196 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 06:18:49.548210 | orchestrator | Saturday 28 March 2026 06:18:17 +0000 (0:00:01.184) 1:04:23.527 ******** 2026-03-28 06:18:49.548224 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-28 06:18:49.548241 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-28 06:18:49.548253 | orchestrator | 2026-03-28 06:18:49.548265 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 06:18:49.548276 | orchestrator | Saturday 28 March 2026 06:18:22 +0000 (0:00:05.137) 1:04:28.665 ******** 2026-03-28 06:18:49.548286 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548309 | orchestrator | 2026-03-28 06:18:49.548320 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 06:18:49.548331 | orchestrator | Saturday 28 March 2026 06:18:23 +0000 (0:00:01.234) 1:04:29.899 ******** 2026-03-28 06:18:49.548343 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548355 | orchestrator | 2026-03-28 06:18:49.548367 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:18:49.548400 | orchestrator | Saturday 28 March 2026 06:18:24 +0000 (0:00:01.162) 1:04:31.062 ******** 2026-03-28 06:18:49.548413 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548426 | orchestrator | 2026-03-28 06:18:49.548438 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:18:49.548449 | orchestrator | Saturday 28 March 2026 06:18:25 +0000 (0:00:01.178) 1:04:32.240 ******** 2026-03-28 06:18:49.548461 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548472 | orchestrator | 2026-03-28 06:18:49.548484 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:18:49.548495 | orchestrator | Saturday 28 March 2026 06:18:26 +0000 (0:00:01.165) 1:04:33.405 ******** 2026-03-28 06:18:49.548506 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548518 | orchestrator | 2026-03-28 06:18:49.548529 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:18:49.548539 | orchestrator | Saturday 28 March 2026 06:18:28 +0000 (0:00:01.199) 1:04:34.605 ******** 2026-03-28 06:18:49.548550 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:18:49.548562 | orchestrator | 2026-03-28 06:18:49.548584 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:18:49.548595 | orchestrator | Saturday 28 March 2026 06:18:29 +0000 (0:00:01.265) 1:04:35.870 ******** 2026-03-28 06:18:49.548608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 06:18:49.548619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 06:18:49.548630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 06:18:49.548641 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 06:18:49.548652 | orchestrator | 2026-03-28 06:18:49.548663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:18:49.548674 | orchestrator | Saturday 28 March 2026 06:18:30 +0000 (0:00:01.556) 1:04:37.427 ******** 2026-03-28 06:18:49.548685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 06:18:49.548697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 06:18:49.548707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 06:18:49.548718 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548729 | orchestrator | 2026-03-28 06:18:49.548740 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:18:49.548751 | orchestrator | Saturday 28 March 2026 06:18:32 +0000 (0:00:01.486) 1:04:38.914 ******** 2026-03-28 06:18:49.548763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 06:18:49.548774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 06:18:49.548786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 06:18:49.548830 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.548843 | orchestrator | 2026-03-28 06:18:49.548854 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:18:49.548864 | orchestrator | Saturday 28 March 2026 06:18:33 +0000 (0:00:01.391) 1:04:40.305 ******** 2026-03-28 06:18:49.548877 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:18:49.548890 | orchestrator | 2026-03-28 06:18:49.548901 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:18:49.548912 | orchestrator | Saturday 28 March 2026 06:18:35 +0000 (0:00:01.226) 1:04:41.532 ******** 2026-03-28 06:18:49.548924 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-28 06:18:49.548934 | orchestrator | 2026-03-28 06:18:49.548954 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 06:18:49.548964 | orchestrator | Saturday 28 March 2026 06:18:36 +0000 (0:00:01.498) 1:04:43.030 ******** 2026-03-28 06:18:49.548975 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:18:49.548986 | orchestrator | 2026-03-28 06:18:49.548997 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-28 06:18:49.549007 | orchestrator | Saturday 28 March 2026 06:18:38 +0000 (0:00:01.778) 1:04:44.809 ******** 2026-03-28 06:18:49.549017 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-03-28 06:18:49.549028 | orchestrator | 2026-03-28 06:18:49.549038 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:18:49.549048 | orchestrator | Saturday 28 March 2026 06:18:40 +0000 (0:00:01.653) 1:04:46.463 ******** 2026-03-28 06:18:49.549059 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:18:49.549071 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 06:18:49.549082 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:18:49.549093 | orchestrator | 2026-03-28 06:18:49.549103 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:18:49.549113 | orchestrator | Saturday 28 March 2026 06:18:43 +0000 (0:00:03.186) 1:04:49.649 ******** 2026-03-28 06:18:49.549123 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-28 06:18:49.549135 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 06:18:49.549145 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:18:49.549156 | orchestrator | 2026-03-28 06:18:49.549166 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-28 06:18:49.549175 | orchestrator | Saturday 28 March 2026 06:18:45 +0000 (0:00:01.977) 1:04:51.626 ******** 2026-03-28 06:18:49.549187 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:18:49.549196 | orchestrator | 2026-03-28 06:18:49.549206 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-28 06:18:49.549216 | orchestrator | Saturday 28 March 2026 06:18:46 +0000 (0:00:01.173) 1:04:52.800 ******** 2026-03-28 06:18:49.549226 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-03-28 06:18:49.549237 | orchestrator | 2026-03-28 06:18:49.549247 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-28 06:18:49.549257 | orchestrator | Saturday 28 March 2026 06:18:47 +0000 (0:00:01.535) 1:04:54.335 ******** 2026-03-28 06:18:49.549280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 06:20:06.301625 | orchestrator | 2026-03-28 06:20:06.301724 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-28 06:20:06.301742 | orchestrator | Saturday 28 March 2026 06:18:49 +0000 (0:00:01.629) 1:04:55.965 ******** 2026-03-28 06:20:06.301754 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:20:06.301767 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 06:20:06.301778 | orchestrator | 2026-03-28 06:20:06.301842 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:20:06.301854 | orchestrator | Saturday 28 March 2026 06:18:55 +0000 (0:00:06.081) 1:05:02.046 ******** 
2026-03-28 06:20:06.301866 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:20:06.301892 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:20:06.301903 | orchestrator | 2026-03-28 06:20:06.301915 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:20:06.301926 | orchestrator | Saturday 28 March 2026 06:18:58 +0000 (0:00:03.328) 1:05:05.375 ******** 2026-03-28 06:20:06.301938 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-28 06:20:06.301970 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:20:06.301982 | orchestrator | 2026-03-28 06:20:06.301993 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-28 06:20:06.302004 | orchestrator | Saturday 28 March 2026 06:19:01 +0000 (0:00:02.065) 1:05:07.441 ******** 2026-03-28 06:20:06.302064 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-28 06:20:06.302077 | orchestrator | 2026-03-28 06:20:06.302088 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-28 06:20:06.302099 | orchestrator | Saturday 28 March 2026 06:19:02 +0000 (0:00:01.669) 1:05:09.110 ******** 2026-03-28 06:20:06.302110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302166 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:20:06.302177 | orchestrator | 2026-03-28 06:20:06.302190 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-28 06:20:06.302204 | orchestrator | Saturday 28 March 2026 06:19:04 +0000 (0:00:01.597) 1:05:10.708 ******** 2026-03-28 06:20:06.302217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:20:06.302281 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:20:06.302293 | orchestrator | 2026-03-28 06:20:06.302308 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-28 06:20:06.302320 | orchestrator | Saturday 28 March 2026 06:19:06 +0000 (0:00:01.728) 1:05:12.437 ******** 2026-03-28 06:20:06.302333 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:20:06.302347 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:20:06.302359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:20:06.302372 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:20:06.302385 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:20:06.302398 | orchestrator | 2026-03-28 06:20:06.302411 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-28 06:20:06.302449 | orchestrator | Saturday 28 March 2026 06:19:38 +0000 (0:00:32.185) 1:05:44.622 ******** 2026-03-28 06:20:06.302464 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:20:06.302478 | orchestrator | 2026-03-28 06:20:06.302491 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-28 06:20:06.302503 | orchestrator | Saturday 28 March 2026 06:19:39 +0000 (0:00:01.217) 1:05:45.840 ******** 2026-03-28 06:20:06.302517 | orchestrator | skipping: [testbed-node-3] 2026-03-28 06:20:06.302530 | orchestrator | 2026-03-28 06:20:06.302542 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-28 06:20:06.302555 | orchestrator | Saturday 28 March 2026 06:19:40 +0000 (0:00:01.125) 1:05:46.965 ******** 2026-03-28 06:20:06.302566 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-28 06:20:06.302577 | orchestrator | 2026-03-28 06:20:06.302588 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-28 06:20:06.302599 | orchestrator | Saturday 28 March 2026 06:19:41 +0000 (0:00:01.466) 1:05:48.432 ******** 2026-03-28 06:20:06.302616 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-28 06:20:06.302627 | orchestrator | 2026-03-28 06:20:06.302638 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-28 06:20:06.302650 | orchestrator | Saturday 28 March 2026 06:19:43 +0000 (0:00:01.489) 1:05:49.921 ******** 2026-03-28 06:20:06.302661 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:20:06.302672 | orchestrator | 2026-03-28 06:20:06.302683 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-28 06:20:06.302694 | orchestrator | Saturday 28 March 2026 06:19:45 +0000 (0:00:02.101) 1:05:52.023 ******** 2026-03-28 06:20:06.302705 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:20:06.302716 | orchestrator | 2026-03-28 06:20:06.302727 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-28 06:20:06.302738 | orchestrator | Saturday 28 March 2026 06:19:47 +0000 (0:00:01.943) 1:05:53.966 ******** 2026-03-28 06:20:06.302749 | orchestrator | ok: [testbed-node-3] 2026-03-28 06:20:06.302760 | orchestrator | 2026-03-28 06:20:06.302771 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-28 06:20:06.302782 | orchestrator | Saturday 28 March 2026 06:19:49 +0000 (0:00:02.287) 1:05:56.253 ******** 2026-03-28 06:20:06.302813 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 06:20:06.302824 | orchestrator | 2026-03-28 06:20:06.302835 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-28 06:20:06.302846 | 
orchestrator | 2026-03-28 06:20:06.302858 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 06:20:06.302868 | orchestrator | Saturday 28 March 2026 06:19:52 +0000 (0:00:03.080) 1:05:59.334 ******** 2026-03-28 06:20:06.302880 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-28 06:20:06.302891 | orchestrator | 2026-03-28 06:20:06.302901 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 06:20:06.302913 | orchestrator | Saturday 28 March 2026 06:19:54 +0000 (0:00:01.238) 1:06:00.573 ******** 2026-03-28 06:20:06.302924 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.302935 | orchestrator | 2026-03-28 06:20:06.302946 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 06:20:06.302957 | orchestrator | Saturday 28 March 2026 06:19:55 +0000 (0:00:01.500) 1:06:02.073 ******** 2026-03-28 06:20:06.302968 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.302978 | orchestrator | 2026-03-28 06:20:06.302990 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:20:06.303000 | orchestrator | Saturday 28 March 2026 06:19:56 +0000 (0:00:01.189) 1:06:03.263 ******** 2026-03-28 06:20:06.303011 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.303022 | orchestrator | 2026-03-28 06:20:06.303033 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:20:06.303051 | orchestrator | Saturday 28 March 2026 06:19:58 +0000 (0:00:01.468) 1:06:04.731 ******** 2026-03-28 06:20:06.303062 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.303073 | orchestrator | 2026-03-28 06:20:06.303084 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 06:20:06.303095 | orchestrator | Saturday 
28 March 2026 06:19:59 +0000 (0:00:01.159) 1:06:05.891 ******** 2026-03-28 06:20:06.303106 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.303117 | orchestrator | 2026-03-28 06:20:06.303128 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 06:20:06.303139 | orchestrator | Saturday 28 March 2026 06:20:00 +0000 (0:00:01.240) 1:06:07.131 ******** 2026-03-28 06:20:06.303150 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.303161 | orchestrator | 2026-03-28 06:20:06.303172 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 06:20:06.303183 | orchestrator | Saturday 28 March 2026 06:20:01 +0000 (0:00:01.228) 1:06:08.360 ******** 2026-03-28 06:20:06.303194 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:20:06.303205 | orchestrator | 2026-03-28 06:20:06.303216 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 06:20:06.303227 | orchestrator | Saturday 28 March 2026 06:20:03 +0000 (0:00:01.166) 1:06:09.526 ******** 2026-03-28 06:20:06.303238 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:06.303249 | orchestrator | 2026-03-28 06:20:06.303260 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 06:20:06.303272 | orchestrator | Saturday 28 March 2026 06:20:04 +0000 (0:00:01.156) 1:06:10.682 ******** 2026-03-28 06:20:06.303283 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:20:06.303293 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:20:06.303305 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:20:06.303316 | orchestrator | 2026-03-28 06:20:06.303327 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-28 06:20:06.303344 | orchestrator | Saturday 28 March 2026 06:20:06 +0000 (0:00:02.040) 1:06:12.722 ******** 2026-03-28 06:20:31.132329 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:20:31.132410 | orchestrator | 2026-03-28 06:20:31.132418 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 06:20:31.132423 | orchestrator | Saturday 28 March 2026 06:20:07 +0000 (0:00:01.248) 1:06:13.971 ******** 2026-03-28 06:20:31.132428 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:20:31.132433 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:20:31.132437 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:20:31.132441 | orchestrator | 2026-03-28 06:20:31.132446 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 06:20:31.132450 | orchestrator | Saturday 28 March 2026 06:20:10 +0000 (0:00:02.882) 1:06:16.854 ******** 2026-03-28 06:20:31.132465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 06:20:31.132470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 06:20:31.132474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 06:20:31.132478 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:20:31.132481 | orchestrator | 2026-03-28 06:20:31.132485 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 06:20:31.132489 | orchestrator | Saturday 28 March 2026 06:20:11 +0000 (0:00:01.397) 1:06:18.252 ******** 2026-03-28 06:20:31.132494 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132521 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:20:31.132525 | orchestrator | 2026-03-28 06:20:31.132529 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 06:20:31.132533 | orchestrator | Saturday 28 March 2026 06:20:13 +0000 (0:00:01.621) 1:06:19.873 ******** 2026-03-28 06:20:31.132538 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132544 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132548 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:20:31.132552 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:20:31.132556 | orchestrator | 2026-03-28 06:20:31.132560 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 06:20:31.132564 | orchestrator | Saturday 28 March 2026 06:20:14 +0000 (0:00:01.174) 1:06:21.048 ******** 2026-03-28 06:20:31.132578 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 06:20:08.071416', 'end': '2026-03-28 06:20:08.130506', 'delta': '0:00:00.059090', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 06:20:31.132587 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:20:08.660881', 'end': '2026-03-28 06:20:08.704520', 'delta': '0:00:00.043639', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 06:20:31.132595 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:20:09.205935', 'end': '2026-03-28 06:20:09.258863', 'delta': '0:00:00.052928', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 06:20:31.132599 | orchestrator |
2026-03-28 06:20:31.132603 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-28 06:20:31.132607 | orchestrator | Saturday 28 March 2026 06:20:15 +0000 (0:00:01.238) 1:06:22.286 ********
2026-03-28 06:20:31.132611 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:31.132615 | orchestrator |
2026-03-28 06:20:31.132618 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-28 06:20:31.132622 | orchestrator | Saturday 28 March 2026 06:20:17 +0000 (0:00:01.298) 1:06:23.585 ********
2026-03-28 06:20:31.132626 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132630 | orchestrator |
2026-03-28 06:20:31.132634 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-28 06:20:31.132638 | orchestrator | Saturday 28 March 2026 06:20:18 +0000 (0:00:01.274) 1:06:24.860 ********
2026-03-28 06:20:31.132641 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:31.132645 | orchestrator |
2026-03-28 06:20:31.132649 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-28 06:20:31.132653 | orchestrator | Saturday 28 March 2026 06:20:19 +0000 (0:00:01.190) 1:06:26.051 ********
2026-03-28 06:20:31.132656 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-28 06:20:31.132660 | orchestrator |
2026-03-28 06:20:31.132664 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:20:31.132668 | orchestrator | Saturday 28 March 2026 06:20:21 +0000 (0:00:02.098) 1:06:28.149 ********
2026-03-28 06:20:31.132672 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:31.132675 | orchestrator |
2026-03-28 06:20:31.132679 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-28 06:20:31.132683 | orchestrator | Saturday 28 March 2026 06:20:22 +0000 (0:00:01.150) 1:06:29.300 ********
2026-03-28 06:20:31.132687 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132690 | orchestrator |
2026-03-28 06:20:31.132694 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-28 06:20:31.132698 | orchestrator | Saturday 28 March 2026 06:20:24 +0000 (0:00:01.164) 1:06:30.465 ********
2026-03-28 06:20:31.132702 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132705 | orchestrator |
2026-03-28 06:20:31.132709 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-28 06:20:31.132713 | orchestrator | Saturday 28 March 2026 06:20:25 +0000 (0:00:01.202) 1:06:31.668 ********
2026-03-28 06:20:31.132717 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132721 | orchestrator |
2026-03-28 06:20:31.132724 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-28 06:20:31.132728 | orchestrator | Saturday 28 March 2026 06:20:26 +0000 (0:00:01.239) 1:06:32.907 ********
2026-03-28 06:20:31.132732 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132735 | orchestrator |
2026-03-28 06:20:31.132739 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-28 06:20:31.132743 | orchestrator | Saturday 28 March 2026 06:20:27 +0000 (0:00:01.139) 1:06:34.046 ********
2026-03-28 06:20:31.132747 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:31.132750 | orchestrator |
2026-03-28 06:20:31.132754 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-28 06:20:31.132761 | orchestrator | Saturday 28 March 2026 06:20:28 +0000 (0:00:01.176) 1:06:35.222 ********
2026-03-28 06:20:31.132765 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:31.132769 | orchestrator |
2026-03-28 06:20:31.132773 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-28 06:20:31.132776 | orchestrator | Saturday 28 March 2026 06:20:29 +0000 (0:00:01.125) 1:06:36.347 ********
2026-03-28 06:20:31.132825 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:31.132830 | orchestrator |
2026-03-28 06:20:31.132834 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-28 06:20:31.132841 | orchestrator | Saturday 28 March 2026 06:20:31 +0000 (0:00:01.204) 1:06:37.552 ********
2026-03-28 06:20:33.676126 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:33.676229 | orchestrator |
2026-03-28 06:20:33.676244 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-28 06:20:33.676258 | orchestrator | Saturday 28 March 2026 06:20:32 +0000 (0:00:01.162) 1:06:38.714 ********
2026-03-28 06:20:33.676270 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:33.676283 | orchestrator |
2026-03-28 06:20:33.676295 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-28 06:20:33.676307 | orchestrator | Saturday 28 March 2026 06:20:33 +0000 (0:00:01.162) 1:06:39.876 ********
2026-03-28 06:20:33.676338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}})
2026-03-28 06:20:33.676372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-28 06:20:33.676385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}})
2026-03-28 06:20:33.676418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-28 06:20:33.676480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}})
2026-03-28 06:20:33.676527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}})
2026-03-28 06:20:33.676546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:33.676578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-28 06:20:35.064973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:35.065072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-28 06:20:35.065088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-28 06:20:35.065125 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:35.065138 | orchestrator |
2026-03-28 06:20:35.065149 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 06:20:35.065160 | orchestrator | Saturday 28 March 2026 06:20:34 +0000 (0:00:01.368) 1:06:41.245 ********
2026-03-28 06:20:35.065171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41', 'dm-uuid-LVM-4NeR7xBe05M5dAiGzRIflBeO6QI2q0ZiOo5EWC7zf8ek72Je67tF5vlmAAM4DcCM'], 'uuids': ['78dfabb1-bec0-4eb7-8e2f-19b8b1ef8260'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7', 'scsi-SQEMU_QEMU_HARDDISK_67aa0ce5-3e47-424e-8717-6160a44d1ef7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '67aa0ce5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065238 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Phgfal-rs0n-jm0I-UUyX-1JJi-JWkd-EglQc4', 'scsi-0QEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4', 'scsi-SQEMU_QEMU_HARDDISK_db1b5262-00e3-40b1-8f63-94df47115ae4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065271 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:35.065315 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy', 'dm-uuid-CRYPT-LUKS2-5f0a17fd26524f70972a151d0475a726-yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--80a8d2d8--5d5c--5988--8f38--8985bde94181-osd--block--80a8d2d8--5d5c--5988--8f38--8985bde94181', 'dm-uuid-LVM-gEYfwj5eefYusGTWxNBXy936V1GPEovByNbcgApUvnk7fwjMu0DQ71yHTSDBrCGy'], 'uuids': ['5f0a17fd-2652-4f70-972a-151d0475a726'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'db1b5262', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yNbcgA-pUvn-k7fw-jMu0-DQ71-yHTS-DBrCGy']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qEFUvf-c5aO-OUue-n5Jk-NOzl-8Aii-1W4rNG', 'scsi-0QEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab', 'scsi-SQEMU_QEMU_HARDDISK_c6cb080e-98ea-450b-9996-59c87757dbab'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c6cb080e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41-osd--block--9e2c40d7--ed5b--5b0c--9c02--6c53c9658e41']}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2896204d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1', 'scsi-SQEMU_QEMU_HARDDISK_2896204d-ece7-4cc8-bdd6-31efe6d1f785-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM', 'dm-uuid-CRYPT-LUKS2-78dfabb1bec04eb78e2f19b8b1ef8260-Oo5EWC-7zf8-ek72-Je67-tF5v-lmAA-M4DcCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:20:40.541882 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:20:40.541897 | orchestrator |
2026-03-28 06:20:40.541909 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 06:20:40.541923 | orchestrator | Saturday 28 March 2026 06:20:36 +0000 (0:00:01.439) 1:06:42.684 ********
2026-03-28 06:20:40.541936 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:40.541949 | orchestrator |
2026-03-28 06:20:40.541962 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 06:20:40.541975 | orchestrator | Saturday 28 March 2026 06:20:37 +0000 (0:00:01.524) 1:06:44.208 ********
2026-03-28 06:20:40.541987 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:40.542000 | orchestrator |
2026-03-28 06:20:40.542012 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:20:40.542099 | orchestrator | Saturday 28 March 2026 06:20:38 +0000 (0:00:01.201) 1:06:45.410 ********
2026-03-28 06:20:40.542121 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:20:40.542134 | orchestrator |
2026-03-28 06:20:40.542146 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 06:20:40.542169 | orchestrator | Saturday 28 March 2026 06:20:40 +0000 (0:00:01.557) 1:06:46.967 ********
2026-03-28 06:21:24.712152 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712266 | orchestrator |
2026-03-28 06:21:24.712282 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:21:24.712295 | orchestrator | Saturday 28 March 2026 06:20:41 +0000 (0:00:01.169) 1:06:48.137 ********
2026-03-28 06:21:24.712306 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712317 | orchestrator |
2026-03-28 06:21:24.712329 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 06:21:24.712340 | orchestrator | Saturday 28 March 2026 06:20:43 +0000 (0:00:01.707) 1:06:49.844 ********
2026-03-28 06:21:24.712351 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712362 | orchestrator |
2026-03-28 06:21:24.712372 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 06:21:24.712384 | orchestrator | Saturday 28 March 2026 06:20:44 +0000 (0:00:01.152) 1:06:50.997 ********
2026-03-28 06:21:24.712395 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 06:21:24.712407 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 06:21:24.712418 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 06:21:24.712429 | orchestrator |
2026-03-28 06:21:24.712440 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 06:21:24.712451 | orchestrator | Saturday 28 March 2026 06:20:46 +0000 (0:00:01.748) 1:06:52.745 ********
2026-03-28 06:21:24.712462 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 06:21:24.712473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 06:21:24.712484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 06:21:24.712495 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712506 | orchestrator |
2026-03-28 06:21:24.712516 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 06:21:24.712527 | orchestrator | Saturday 28 March 2026 06:20:47 +0000 (0:00:01.214) 1:06:53.960 ********
2026-03-28 06:21:24.712538 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-28 06:21:24.712550 | orchestrator |
2026-03-28 06:21:24.712562 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 06:21:24.712574 | orchestrator | Saturday 28 March 2026 06:20:48 +0000 (0:00:01.168) 1:06:55.128 ********
2026-03-28 06:21:24.712585 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712596 | orchestrator |
2026-03-28 06:21:24.712607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 06:21:24.712618 | orchestrator | Saturday 28 March 2026 06:20:49 +0000 (0:00:01.171) 1:06:56.300 ********
2026-03-28 06:21:24.712629 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712640 | orchestrator |
2026-03-28 06:21:24.712651 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 06:21:24.712662 | orchestrator | Saturday 28 March 2026 06:20:51 +0000 (0:00:01.183) 1:06:57.483 ********
2026-03-28 06:21:24.712673 | orchestrator | skipping: [testbed-node-4]
2026-03-28 06:21:24.712686 | orchestrator |
2026-03-28 06:21:24.712699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 06:21:24.712712 | orchestrator | Saturday 28 March 2026 06:20:52 +0000 (0:00:01.170) 1:06:58.654 ********
2026-03-28 06:21:24.712724 | orchestrator | ok: [testbed-node-4]
2026-03-28 06:21:24.712737 | orchestrator |
2026-03-28 06:21:24.712750 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 06:21:24.712762 | orchestrator | Saturday 28 March 2026 06:20:53 +0000 (0:00:01.269) 1:06:59.923 ********
2026-03-28 06:21:24.712831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-28 06:21:24.712847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-28 06:21:24.712865 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-5)  2026-03-28 06:21:24.712920 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.712940 | orchestrator | 2026-03-28 06:21:24.712959 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:21:24.712996 | orchestrator | Saturday 28 March 2026 06:20:54 +0000 (0:00:01.442) 1:07:01.366 ******** 2026-03-28 06:21:24.713015 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:21:24.713033 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:21:24.713050 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:21:24.713066 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.713082 | orchestrator | 2026-03-28 06:21:24.713098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:21:24.713114 | orchestrator | Saturday 28 March 2026 06:20:56 +0000 (0:00:01.836) 1:07:03.202 ******** 2026-03-28 06:21:24.713129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:21:24.713146 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:21:24.713164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:21:24.713182 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.713198 | orchestrator | 2026-03-28 06:21:24.713216 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:21:24.713232 | orchestrator | Saturday 28 March 2026 06:20:58 +0000 (0:00:01.817) 1:07:05.020 ******** 2026-03-28 06:21:24.713249 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:21:24.713267 | orchestrator | 2026-03-28 06:21:24.713286 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:21:24.713303 | orchestrator | Saturday 28 March 2026 06:20:59 +0000 
(0:00:01.293) 1:07:06.314 ******** 2026-03-28 06:21:24.713321 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 06:21:24.713339 | orchestrator | 2026-03-28 06:21:24.713359 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 06:21:24.713378 | orchestrator | Saturday 28 March 2026 06:21:01 +0000 (0:00:01.520) 1:07:07.834 ******** 2026-03-28 06:21:24.713420 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:21:24.713433 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:21:24.713444 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:21:24.713474 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-28 06:21:24.713485 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 06:21:24.713496 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:21:24.713507 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:21:24.713530 | orchestrator | 2026-03-28 06:21:24.713541 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 06:21:24.713551 | orchestrator | Saturday 28 March 2026 06:21:03 +0000 (0:00:01.905) 1:07:09.739 ******** 2026-03-28 06:21:24.713563 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:21:24.713573 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:21:24.713584 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:21:24.713595 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-28 06:21:24.713606 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-28 06:21:24.713617 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 06:21:24.713641 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 06:21:24.713652 | orchestrator | 2026-03-28 06:21:24.713663 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-28 06:21:24.713674 | orchestrator | Saturday 28 March 2026 06:21:05 +0000 (0:00:02.303) 1:07:12.042 ******** 2026-03-28 06:21:24.713685 | orchestrator | changed: [testbed-node-4] 2026-03-28 06:21:24.713696 | orchestrator | 2026-03-28 06:21:24.713707 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-28 06:21:24.713718 | orchestrator | Saturday 28 March 2026 06:21:07 +0000 (0:00:01.907) 1:07:13.950 ******** 2026-03-28 06:21:24.713729 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:21:24.713741 | orchestrator | 2026-03-28 06:21:24.713752 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-28 06:21:24.713763 | orchestrator | Saturday 28 March 2026 06:21:11 +0000 (0:00:03.611) 1:07:17.561 ******** 2026-03-28 06:21:24.713774 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:21:24.713821 | orchestrator | 2026-03-28 06:21:24.713833 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 06:21:24.713844 | orchestrator | Saturday 28 March 2026 06:21:13 +0000 (0:00:01.909) 1:07:19.470 ******** 2026-03-28 06:21:24.713855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-03-28 06:21:24.713867 | orchestrator | 2026-03-28 06:21:24.713878 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 06:21:24.713889 | orchestrator | Saturday 28 March 2026 06:21:14 +0000 (0:00:01.184) 1:07:20.655 ******** 2026-03-28 06:21:24.713899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-03-28 06:21:24.713911 | orchestrator | 2026-03-28 06:21:24.713921 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 06:21:24.713932 | orchestrator | Saturday 28 March 2026 06:21:15 +0000 (0:00:01.186) 1:07:21.841 ******** 2026-03-28 06:21:24.713952 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.713963 | orchestrator | 2026-03-28 06:21:24.713974 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 06:21:24.713985 | orchestrator | Saturday 28 March 2026 06:21:16 +0000 (0:00:01.139) 1:07:22.980 ******** 2026-03-28 06:21:24.713996 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:21:24.714010 | orchestrator | 2026-03-28 06:21:24.714087 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-28 06:21:24.714107 | orchestrator | Saturday 28 March 2026 06:21:18 +0000 (0:00:01.611) 1:07:24.592 ******** 2026-03-28 06:21:24.714125 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:21:24.714256 | orchestrator | 2026-03-28 06:21:24.714276 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 06:21:24.714287 | orchestrator | Saturday 28 March 2026 06:21:19 +0000 (0:00:01.510) 1:07:26.102 ******** 2026-03-28 06:21:24.714298 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:21:24.714309 | orchestrator | 2026-03-28 06:21:24.714320 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 06:21:24.714330 | orchestrator | Saturday 28 March 2026 06:21:21 +0000 (0:00:01.580) 1:07:27.683 ******** 2026-03-28 06:21:24.714341 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.714352 | orchestrator | 2026-03-28 06:21:24.714363 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 06:21:24.714374 | orchestrator | Saturday 28 March 2026 06:21:22 +0000 (0:00:01.143) 1:07:28.826 ******** 2026-03-28 06:21:24.714385 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.714396 | orchestrator | 2026-03-28 06:21:24.714406 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 06:21:24.714430 | orchestrator | Saturday 28 March 2026 06:21:23 +0000 (0:00:01.125) 1:07:29.952 ******** 2026-03-28 06:21:24.714441 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:21:24.714452 | orchestrator | 2026-03-28 06:21:24.714463 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 06:21:24.714487 | orchestrator | Saturday 28 March 2026 06:21:24 +0000 (0:00:01.179) 1:07:31.131 ******** 2026-03-28 06:22:05.123951 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124069 | orchestrator | 2026-03-28 06:22:05.124086 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 06:22:05.124126 | orchestrator | Saturday 28 March 2026 06:21:26 +0000 (0:00:01.613) 1:07:32.746 ******** 2026-03-28 06:22:05.124139 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124150 | orchestrator | 2026-03-28 06:22:05.124162 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 06:22:05.124173 | orchestrator | Saturday 28 March 2026 06:21:27 +0000 (0:00:01.563) 1:07:34.309 ******** 2026-03-28 06:22:05.124185 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124197 | orchestrator | 2026-03-28 06:22:05.124208 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 06:22:05.124219 | orchestrator | Saturday 28 March 2026 06:21:28 +0000 (0:00:00.804) 1:07:35.114 ******** 2026-03-28 06:22:05.124230 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124241 | orchestrator | 2026-03-28 06:22:05.124252 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 06:22:05.124264 | orchestrator | Saturday 28 March 2026 06:21:29 +0000 (0:00:00.813) 1:07:35.928 ******** 2026-03-28 06:22:05.124275 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124286 | orchestrator | 2026-03-28 06:22:05.124297 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 06:22:05.124308 | orchestrator | Saturday 28 March 2026 06:21:30 +0000 (0:00:00.809) 1:07:36.737 ******** 2026-03-28 06:22:05.124319 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124330 | orchestrator | 2026-03-28 06:22:05.124341 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 06:22:05.124352 
| orchestrator | Saturday 28 March 2026 06:21:31 +0000 (0:00:00.801) 1:07:37.539 ******** 2026-03-28 06:22:05.124363 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124374 | orchestrator | 2026-03-28 06:22:05.124385 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 06:22:05.124396 | orchestrator | Saturday 28 March 2026 06:21:31 +0000 (0:00:00.811) 1:07:38.350 ******** 2026-03-28 06:22:05.124407 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124418 | orchestrator | 2026-03-28 06:22:05.124429 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 06:22:05.124440 | orchestrator | Saturday 28 March 2026 06:21:32 +0000 (0:00:00.884) 1:07:39.234 ******** 2026-03-28 06:22:05.124452 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124463 | orchestrator | 2026-03-28 06:22:05.124474 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 06:22:05.124485 | orchestrator | Saturday 28 March 2026 06:21:33 +0000 (0:00:00.779) 1:07:40.014 ******** 2026-03-28 06:22:05.124499 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124512 | orchestrator | 2026-03-28 06:22:05.124525 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 06:22:05.124538 | orchestrator | Saturday 28 March 2026 06:21:34 +0000 (0:00:00.785) 1:07:40.799 ******** 2026-03-28 06:22:05.124551 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124564 | orchestrator | 2026-03-28 06:22:05.124576 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 06:22:05.124589 | orchestrator | Saturday 28 March 2026 06:21:35 +0000 (0:00:00.805) 1:07:41.605 ******** 2026-03-28 06:22:05.124603 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.124616 | orchestrator | 2026-03-28 06:22:05.124629 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-28 06:22:05.124666 | orchestrator | Saturday 28 March 2026 06:21:35 +0000 (0:00:00.802) 1:07:42.408 ******** 2026-03-28 06:22:05.124678 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124689 | orchestrator | 2026-03-28 06:22:05.124700 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-28 06:22:05.124711 | orchestrator | Saturday 28 March 2026 06:21:36 +0000 (0:00:00.762) 1:07:43.170 ******** 2026-03-28 06:22:05.124722 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124733 | orchestrator | 2026-03-28 06:22:05.124744 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-28 06:22:05.124810 | orchestrator | Saturday 28 March 2026 06:21:37 +0000 (0:00:00.796) 1:07:43.967 ******** 2026-03-28 06:22:05.124824 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124835 | orchestrator | 2026-03-28 06:22:05.124846 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-28 06:22:05.124857 | orchestrator | Saturday 28 March 2026 06:21:38 +0000 (0:00:00.780) 1:07:44.747 ******** 2026-03-28 06:22:05.124868 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124879 | orchestrator | 2026-03-28 06:22:05.124890 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-28 06:22:05.124901 | orchestrator | Saturday 28 March 2026 06:21:39 +0000 (0:00:00.778) 1:07:45.525 ******** 2026-03-28 06:22:05.124912 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124923 | orchestrator | 2026-03-28 06:22:05.124934 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-28 06:22:05.124945 | orchestrator | Saturday 28 March 2026 06:21:39 +0000 (0:00:00.787) 1:07:46.313 ******** 
2026-03-28 06:22:05.124956 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.124967 | orchestrator | 2026-03-28 06:22:05.124978 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-28 06:22:05.124989 | orchestrator | Saturday 28 March 2026 06:21:40 +0000 (0:00:00.779) 1:07:47.092 ******** 2026-03-28 06:22:05.125000 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125011 | orchestrator | 2026-03-28 06:22:05.125022 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-28 06:22:05.125033 | orchestrator | Saturday 28 March 2026 06:21:41 +0000 (0:00:00.759) 1:07:47.852 ******** 2026-03-28 06:22:05.125044 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125055 | orchestrator | 2026-03-28 06:22:05.125066 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-28 06:22:05.125077 | orchestrator | Saturday 28 March 2026 06:21:42 +0000 (0:00:00.866) 1:07:48.719 ******** 2026-03-28 06:22:05.125088 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125100 | orchestrator | 2026-03-28 06:22:05.125126 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-28 06:22:05.125138 | orchestrator | Saturday 28 March 2026 06:21:43 +0000 (0:00:00.793) 1:07:49.513 ******** 2026-03-28 06:22:05.125150 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125161 | orchestrator | 2026-03-28 06:22:05.125172 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-28 06:22:05.125183 | orchestrator | Saturday 28 March 2026 06:21:43 +0000 (0:00:00.795) 1:07:50.309 ******** 2026-03-28 06:22:05.125194 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125205 | orchestrator | 2026-03-28 06:22:05.125217 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-28 06:22:05.125228 | orchestrator | Saturday 28 March 2026 06:21:44 +0000 (0:00:00.785) 1:07:51.094 ******** 2026-03-28 06:22:05.125239 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125250 | orchestrator | 2026-03-28 06:22:05.125261 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 06:22:05.125272 | orchestrator | Saturday 28 March 2026 06:21:45 +0000 (0:00:00.818) 1:07:51.912 ******** 2026-03-28 06:22:05.125283 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.125294 | orchestrator | 2026-03-28 06:22:05.125305 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-28 06:22:05.125324 | orchestrator | Saturday 28 March 2026 06:21:47 +0000 (0:00:01.573) 1:07:53.486 ******** 2026-03-28 06:22:05.125336 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.125346 | orchestrator | 2026-03-28 06:22:05.125357 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-28 06:22:05.125369 | orchestrator | Saturday 28 March 2026 06:21:48 +0000 (0:00:01.873) 1:07:55.359 ******** 2026-03-28 06:22:05.125380 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-28 06:22:05.125391 | orchestrator | 2026-03-28 06:22:05.125403 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-28 06:22:05.125414 | orchestrator | Saturday 28 March 2026 06:21:50 +0000 (0:00:01.132) 1:07:56.492 ******** 2026-03-28 06:22:05.125425 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125436 | orchestrator | 2026-03-28 06:22:05.125447 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-28 06:22:05.125458 | orchestrator | Saturday 28 March 2026 06:21:51 +0000 (0:00:01.244) 1:07:57.737 ******** 
2026-03-28 06:22:05.125469 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125480 | orchestrator | 2026-03-28 06:22:05.125491 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-28 06:22:05.125502 | orchestrator | Saturday 28 March 2026 06:21:52 +0000 (0:00:01.191) 1:07:58.928 ******** 2026-03-28 06:22:05.125514 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-28 06:22:05.125525 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-28 06:22:05.125536 | orchestrator | 2026-03-28 06:22:05.125547 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-28 06:22:05.125558 | orchestrator | Saturday 28 March 2026 06:21:54 +0000 (0:00:01.871) 1:08:00.800 ******** 2026-03-28 06:22:05.125569 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.125580 | orchestrator | 2026-03-28 06:22:05.125591 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-28 06:22:05.125602 | orchestrator | Saturday 28 March 2026 06:21:55 +0000 (0:00:01.495) 1:08:02.296 ******** 2026-03-28 06:22:05.125613 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125624 | orchestrator | 2026-03-28 06:22:05.125635 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-28 06:22:05.125646 | orchestrator | Saturday 28 March 2026 06:21:57 +0000 (0:00:01.207) 1:08:03.503 ******** 2026-03-28 06:22:05.125657 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125668 | orchestrator | 2026-03-28 06:22:05.125679 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-28 06:22:05.125690 | orchestrator | Saturday 28 March 2026 06:21:57 +0000 (0:00:00.784) 1:08:04.288 ******** 2026-03-28 06:22:05.125707 | orchestrator | 
skipping: [testbed-node-4] 2026-03-28 06:22:05.125718 | orchestrator | 2026-03-28 06:22:05.125729 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-28 06:22:05.125740 | orchestrator | Saturday 28 March 2026 06:21:58 +0000 (0:00:00.783) 1:08:05.071 ******** 2026-03-28 06:22:05.125751 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-03-28 06:22:05.125762 | orchestrator | 2026-03-28 06:22:05.125790 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-28 06:22:05.125801 | orchestrator | Saturday 28 March 2026 06:21:59 +0000 (0:00:01.149) 1:08:06.221 ******** 2026-03-28 06:22:05.125812 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:05.125823 | orchestrator | 2026-03-28 06:22:05.125834 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-28 06:22:05.125845 | orchestrator | Saturday 28 March 2026 06:22:01 +0000 (0:00:01.757) 1:08:07.978 ******** 2026-03-28 06:22:05.125856 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-28 06:22:05.125867 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-28 06:22:05.125885 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-28 06:22:05.125896 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125907 | orchestrator | 2026-03-28 06:22:05.125918 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-28 06:22:05.125928 | orchestrator | Saturday 28 March 2026 06:22:02 +0000 (0:00:01.180) 1:08:09.158 ******** 2026-03-28 06:22:05.125939 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125950 | orchestrator | 2026-03-28 06:22:05.125961 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-28 06:22:05.125972 | orchestrator | Saturday 28 March 2026 06:22:03 +0000 (0:00:01.149) 1:08:10.308 ******** 2026-03-28 06:22:05.125983 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:05.125994 | orchestrator | 2026-03-28 06:22:05.126010 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-28 06:22:48.547308 | orchestrator | Saturday 28 March 2026 06:22:05 +0000 (0:00:01.238) 1:08:11.547 ******** 2026-03-28 06:22:48.547430 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547449 | orchestrator | 2026-03-28 06:22:48.547462 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-28 06:22:48.547474 | orchestrator | Saturday 28 March 2026 06:22:06 +0000 (0:00:01.162) 1:08:12.709 ******** 2026-03-28 06:22:48.547485 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547496 | orchestrator | 2026-03-28 06:22:48.547508 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-28 06:22:48.547519 | orchestrator | Saturday 28 March 2026 06:22:07 +0000 (0:00:01.170) 1:08:13.880 ******** 2026-03-28 06:22:48.547530 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547541 | orchestrator | 2026-03-28 06:22:48.547552 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-28 06:22:48.547564 | orchestrator | Saturday 28 March 2026 06:22:08 +0000 (0:00:00.832) 1:08:14.713 ******** 2026-03-28 06:22:48.547576 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:48.547588 | orchestrator | 2026-03-28 06:22:48.547599 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-28 06:22:48.547611 | orchestrator | Saturday 28 March 2026 06:22:10 +0000 (0:00:02.232) 1:08:16.946 ******** 2026-03-28 06:22:48.547622 | orchestrator | ok: 
[testbed-node-4] 2026-03-28 06:22:48.547633 | orchestrator | 2026-03-28 06:22:48.547644 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-28 06:22:48.547655 | orchestrator | Saturday 28 March 2026 06:22:11 +0000 (0:00:00.837) 1:08:17.783 ******** 2026-03-28 06:22:48.547667 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-03-28 06:22:48.547678 | orchestrator | 2026-03-28 06:22:48.547689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-28 06:22:48.547700 | orchestrator | Saturday 28 March 2026 06:22:12 +0000 (0:00:01.164) 1:08:18.947 ******** 2026-03-28 06:22:48.547711 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547722 | orchestrator | 2026-03-28 06:22:48.547733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-28 06:22:48.547744 | orchestrator | Saturday 28 March 2026 06:22:13 +0000 (0:00:01.156) 1:08:20.104 ******** 2026-03-28 06:22:48.547755 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547832 | orchestrator | 2026-03-28 06:22:48.547848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-28 06:22:48.547860 | orchestrator | Saturday 28 March 2026 06:22:14 +0000 (0:00:01.174) 1:08:21.279 ******** 2026-03-28 06:22:48.547873 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547886 | orchestrator | 2026-03-28 06:22:48.547898 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-28 06:22:48.547911 | orchestrator | Saturday 28 March 2026 06:22:15 +0000 (0:00:01.148) 1:08:22.427 ******** 2026-03-28 06:22:48.547924 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.547939 | orchestrator | 2026-03-28 06:22:48.547989 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-28 06:22:48.548010 | orchestrator | Saturday 28 March 2026 06:22:17 +0000 (0:00:01.159) 1:08:23.587 ******** 2026-03-28 06:22:48.548027 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548045 | orchestrator | 2026-03-28 06:22:48.548062 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-28 06:22:48.548081 | orchestrator | Saturday 28 March 2026 06:22:18 +0000 (0:00:01.158) 1:08:24.745 ******** 2026-03-28 06:22:48.548098 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548118 | orchestrator | 2026-03-28 06:22:48.548136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-28 06:22:48.548156 | orchestrator | Saturday 28 March 2026 06:22:19 +0000 (0:00:01.153) 1:08:25.899 ******** 2026-03-28 06:22:48.548173 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548186 | orchestrator | 2026-03-28 06:22:48.548200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-28 06:22:48.548226 | orchestrator | Saturday 28 March 2026 06:22:20 +0000 (0:00:01.177) 1:08:27.077 ******** 2026-03-28 06:22:48.548237 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548248 | orchestrator | 2026-03-28 06:22:48.548259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-28 06:22:48.548270 | orchestrator | Saturday 28 March 2026 06:22:21 +0000 (0:00:01.148) 1:08:28.225 ******** 2026-03-28 06:22:48.548281 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:22:48.548292 | orchestrator | 2026-03-28 06:22:48.548303 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-28 06:22:48.548314 | orchestrator | Saturday 28 March 2026 06:22:22 +0000 (0:00:00.819) 1:08:29.044 ******** 2026-03-28 06:22:48.548324 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-28 06:22:48.548336 | orchestrator | 2026-03-28 06:22:48.548347 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-28 06:22:48.548359 | orchestrator | Saturday 28 March 2026 06:22:23 +0000 (0:00:01.287) 1:08:30.332 ******** 2026-03-28 06:22:48.548370 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-28 06:22:48.548381 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-28 06:22:48.548392 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-28 06:22:48.548403 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-28 06:22:48.548414 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-28 06:22:48.548424 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-28 06:22:48.548435 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-28 06:22:48.548446 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-28 06:22:48.548457 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-28 06:22:48.548468 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-28 06:22:48.548479 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-28 06:22:48.548508 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-28 06:22:48.548520 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-28 06:22:48.548531 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-28 06:22:48.548542 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-28 06:22:48.548553 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-28 06:22:48.548564 | orchestrator | 2026-03-28 06:22:48.548575 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 06:22:48.548586 | orchestrator | Saturday 28 March 2026 06:22:30 +0000 (0:00:06.214) 1:08:36.546 ******** 2026-03-28 06:22:48.548597 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-28 06:22:48.548608 | orchestrator | 2026-03-28 06:22:48.548618 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-28 06:22:48.548637 | orchestrator | Saturday 28 March 2026 06:22:31 +0000 (0:00:01.146) 1:08:37.693 ******** 2026-03-28 06:22:48.548648 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:22:48.548660 | orchestrator | 2026-03-28 06:22:48.548671 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-28 06:22:48.548682 | orchestrator | Saturday 28 March 2026 06:22:32 +0000 (0:00:01.537) 1:08:39.231 ******** 2026-03-28 06:22:48.548693 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:22:48.548704 | orchestrator | 2026-03-28 06:22:48.548714 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 06:22:48.548725 | orchestrator | Saturday 28 March 2026 06:22:34 +0000 (0:00:01.682) 1:08:40.913 ******** 2026-03-28 06:22:48.548736 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548747 | orchestrator | 2026-03-28 06:22:48.548757 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 06:22:48.548793 | orchestrator | Saturday 28 March 2026 06:22:35 +0000 (0:00:00.834) 1:08:41.748 ******** 2026-03-28 06:22:48.548805 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548815 | 
orchestrator | 2026-03-28 06:22:48.548826 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 06:22:48.548837 | orchestrator | Saturday 28 March 2026 06:22:36 +0000 (0:00:00.817) 1:08:42.565 ******** 2026-03-28 06:22:48.548848 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548858 | orchestrator | 2026-03-28 06:22:48.548869 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 06:22:48.548880 | orchestrator | Saturday 28 March 2026 06:22:36 +0000 (0:00:00.812) 1:08:43.378 ******** 2026-03-28 06:22:48.548890 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548901 | orchestrator | 2026-03-28 06:22:48.548912 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 06:22:48.548922 | orchestrator | Saturday 28 March 2026 06:22:37 +0000 (0:00:00.871) 1:08:44.250 ******** 2026-03-28 06:22:48.548933 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548944 | orchestrator | 2026-03-28 06:22:48.548954 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 06:22:48.548965 | orchestrator | Saturday 28 March 2026 06:22:38 +0000 (0:00:00.829) 1:08:45.079 ******** 2026-03-28 06:22:48.548976 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.548986 | orchestrator | 2026-03-28 06:22:48.548997 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 06:22:48.549008 | orchestrator | Saturday 28 March 2026 06:22:39 +0000 (0:00:00.781) 1:08:45.862 ******** 2026-03-28 06:22:48.549019 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.549030 | orchestrator | 2026-03-28 06:22:48.549046 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-28 06:22:48.549057 | orchestrator | Saturday 28 March 2026 06:22:40 +0000 (0:00:00.913) 1:08:46.775 ******** 2026-03-28 06:22:48.549068 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.549078 | orchestrator | 2026-03-28 06:22:48.549089 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 06:22:48.549100 | orchestrator | Saturday 28 March 2026 06:22:41 +0000 (0:00:00.834) 1:08:47.609 ******** 2026-03-28 06:22:48.549110 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.549121 | orchestrator | 2026-03-28 06:22:48.549131 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 06:22:48.549142 | orchestrator | Saturday 28 March 2026 06:22:41 +0000 (0:00:00.784) 1:08:48.393 ******** 2026-03-28 06:22:48.549153 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.549164 | orchestrator | 2026-03-28 06:22:48.549174 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 06:22:48.549192 | orchestrator | Saturday 28 March 2026 06:22:42 +0000 (0:00:00.795) 1:08:49.189 ******** 2026-03-28 06:22:48.549203 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:22:48.549214 | orchestrator | 2026-03-28 06:22:48.549224 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 06:22:48.549235 | orchestrator | Saturday 28 March 2026 06:22:43 +0000 (0:00:00.798) 1:08:49.987 ******** 2026-03-28 06:22:48.549246 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-28 06:22:48.549256 | orchestrator | 2026-03-28 06:22:48.549267 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 06:22:48.549278 | orchestrator | Saturday 28 March 2026 06:22:47 +0000 (0:00:04.154) 1:08:54.142 ******** 2026-03-28 06:22:48.549289 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:22:48.549300 | orchestrator | 2026-03-28 06:22:48.549317 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 06:23:30.237223 | orchestrator | Saturday 28 March 2026 06:22:48 +0000 (0:00:00.828) 1:08:54.971 ******** 2026-03-28 06:23:30.237343 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-28 06:23:30.237364 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-28 06:23:30.237378 | orchestrator | 2026-03-28 06:23:30.237391 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 06:23:30.237403 | orchestrator | Saturday 28 March 2026 06:22:53 +0000 (0:00:04.613) 1:08:59.585 ******** 2026-03-28 06:23:30.237414 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237426 | orchestrator | 2026-03-28 06:23:30.237438 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 06:23:30.237449 | orchestrator | Saturday 28 March 2026 06:22:53 +0000 (0:00:00.807) 1:09:00.392 ******** 2026-03-28 06:23:30.237460 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237471 | orchestrator | 2026-03-28 06:23:30.237482 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:23:30.237495 | orchestrator | Saturday 28 March 2026 06:22:54 +0000 (0:00:00.810) 1:09:01.205 ******** 2026-03-28 06:23:30.237506 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237517 | orchestrator | 2026-03-28 06:23:30.237528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:23:30.237539 | orchestrator | Saturday 28 March 2026 06:22:55 +0000 (0:00:00.826) 1:09:02.032 ******** 2026-03-28 06:23:30.237549 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237560 | orchestrator | 2026-03-28 06:23:30.237571 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:23:30.237582 | orchestrator | Saturday 28 March 2026 06:22:56 +0000 (0:00:00.823) 1:09:02.856 ******** 2026-03-28 06:23:30.237593 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237604 | orchestrator | 2026-03-28 06:23:30.237615 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:23:30.237627 | orchestrator | Saturday 28 March 2026 06:22:57 +0000 (0:00:00.840) 1:09:03.696 ******** 2026-03-28 06:23:30.237638 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:23:30.237649 | orchestrator | 2026-03-28 06:23:30.237661 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:23:30.237697 | orchestrator | Saturday 28 March 2026 06:22:58 +0000 (0:00:00.861) 1:09:04.558 ******** 2026-03-28 06:23:30.237709 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:23:30.237721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:23:30.237732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:23:30.237743 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 06:23:30.237754 | orchestrator | 2026-03-28 06:23:30.237795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:23:30.237810 | orchestrator | Saturday 28 March 2026 06:22:59 +0000 (0:00:01.566) 1:09:06.124 ******** 2026-03-28 06:23:30.237822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:23:30.237852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:23:30.237864 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:23:30.237877 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237889 | orchestrator | 2026-03-28 06:23:30.237902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:23:30.237915 | orchestrator | Saturday 28 March 2026 06:23:00 +0000 (0:00:01.124) 1:09:07.249 ******** 2026-03-28 06:23:30.237928 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-28 06:23:30.237941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-28 06:23:30.237953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-28 06:23:30.237966 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.237978 | orchestrator | 2026-03-28 06:23:30.237991 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:23:30.238004 | orchestrator | Saturday 28 March 2026 06:23:01 +0000 (0:00:01.106) 1:09:08.356 ******** 2026-03-28 06:23:30.238072 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:23:30.238086 | orchestrator | 2026-03-28 06:23:30.238099 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:23:30.238112 | orchestrator | Saturday 28 March 2026 06:23:02 +0000 (0:00:00.826) 1:09:09.182 ******** 2026-03-28 06:23:30.238123 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-28 06:23:30.238134 | orchestrator | 2026-03-28 06:23:30.238145 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 06:23:30.238155 | orchestrator | Saturday 28 March 2026 06:23:03 +0000 (0:00:01.039) 1:09:10.222 ******** 2026-03-28 06:23:30.238166 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:23:30.238177 | orchestrator | 2026-03-28 06:23:30.238188 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-28 06:23:30.238199 | orchestrator | Saturday 28 March 2026 06:23:05 +0000 (0:00:01.458) 1:09:11.681 ******** 2026-03-28 06:23:30.238210 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-28 06:23:30.238221 | orchestrator | 2026-03-28 06:23:30.238250 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:23:30.238262 | orchestrator | Saturday 28 March 2026 06:23:06 +0000 (0:00:01.110) 1:09:12.792 ******** 2026-03-28 06:23:30.238273 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:23:30.238284 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 06:23:30.238295 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:23:30.238306 | orchestrator | 2026-03-28 06:23:30.238317 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:23:30.238328 | orchestrator | Saturday 28 March 2026 06:23:09 +0000 (0:00:03.203) 1:09:15.996 ******** 2026-03-28 06:23:30.238339 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-28 06:23:30.238350 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 06:23:30.238361 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:23:30.238372 | orchestrator | 2026-03-28 06:23:30.238383 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-28 06:23:30.238404 | orchestrator | Saturday 28 March 2026 06:23:11 +0000 (0:00:01.996) 1:09:17.992 ******** 2026-03-28 06:23:30.238415 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.238426 | orchestrator | 2026-03-28 06:23:30.238437 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-28 06:23:30.238448 | orchestrator | Saturday 28 March 2026 06:23:12 +0000 (0:00:00.783) 1:09:18.775 ******** 2026-03-28 06:23:30.238459 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-28 06:23:30.238470 | orchestrator | 2026-03-28 06:23:30.238481 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-28 06:23:30.238492 | orchestrator | Saturday 28 March 2026 06:23:13 +0000 (0:00:01.284) 1:09:20.060 ******** 2026-03-28 06:23:30.238503 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:23:30.238516 | orchestrator | 2026-03-28 06:23:30.238527 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-28 06:23:30.238538 | orchestrator | Saturday 28 March 2026 06:23:15 +0000 (0:00:01.659) 1:09:21.719 ******** 2026-03-28 06:23:30.238549 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:23:30.238560 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 06:23:30.238571 | orchestrator | 2026-03-28 06:23:30.238582 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:23:30.238592 | orchestrator | Saturday 28 March 2026 06:23:20 +0000 (0:00:05.193) 1:09:26.912 ******** 
2026-03-28 06:23:30.238603 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:23:30.238614 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:23:30.238625 | orchestrator | 2026-03-28 06:23:30.238636 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:23:30.238647 | orchestrator | Saturday 28 March 2026 06:23:23 +0000 (0:00:03.153) 1:09:30.066 ******** 2026-03-28 06:23:30.238670 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-28 06:23:30.238682 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:23:30.238693 | orchestrator | 2026-03-28 06:23:30.238704 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-28 06:23:30.238715 | orchestrator | Saturday 28 March 2026 06:23:25 +0000 (0:00:01.709) 1:09:31.776 ******** 2026-03-28 06:23:30.238725 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-28 06:23:30.238736 | orchestrator | 2026-03-28 06:23:30.238747 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-28 06:23:30.238789 | orchestrator | Saturday 28 March 2026 06:23:26 +0000 (0:00:01.173) 1:09:32.950 ******** 2026-03-28 06:23:30.238804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238860 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:23:30.238872 | orchestrator | 2026-03-28 06:23:30.238883 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-28 06:23:30.238894 | orchestrator | Saturday 28 March 2026 06:23:28 +0000 (0:00:01.691) 1:09:34.642 ******** 2026-03-28 06:23:30.238912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:23:30.238952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:24:38.438173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:24:38.438337 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:24:38.438353 | orchestrator | 2026-03-28 06:24:38.438363 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-28 06:24:38.438373 | orchestrator | Saturday 28 March 2026 06:23:30 +0000 (0:00:02.015) 1:09:36.657 ******** 2026-03-28 06:24:38.438381 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:24:38.438391 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:24:38.438399 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:24:38.438407 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:24:38.438415 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:24:38.438423 | orchestrator | 2026-03-28 06:24:38.438432 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-28 06:24:38.438440 | orchestrator | Saturday 28 March 2026 06:24:03 +0000 (0:00:32.945) 1:10:09.604 ******** 2026-03-28 06:24:38.438448 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:24:38.438456 | orchestrator | 2026-03-28 06:24:38.438464 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-28 06:24:38.438472 | orchestrator | Saturday 28 March 2026 06:24:03 +0000 (0:00:00.788) 1:10:10.392 ******** 2026-03-28 06:24:38.438480 | orchestrator | skipping: [testbed-node-4] 2026-03-28 06:24:38.438488 | orchestrator | 2026-03-28 06:24:38.438496 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-28 06:24:38.438504 | orchestrator | Saturday 28 March 2026 06:24:04 +0000 (0:00:00.789) 1:10:11.181 ******** 2026-03-28 06:24:38.438512 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-28 06:24:38.438521 | orchestrator | 2026-03-28 06:24:38.438529 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-28 06:24:38.438537 | orchestrator | Saturday 28 March 2026 06:24:06 +0000 (0:00:01.271) 1:10:12.453 ******** 2026-03-28 06:24:38.438545 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-28 06:24:38.438552 | orchestrator | 2026-03-28 06:24:38.438560 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-28 06:24:38.438568 | orchestrator | Saturday 28 March 2026 06:24:07 +0000 (0:00:01.130) 1:10:13.583 ******** 2026-03-28 06:24:38.438576 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:24:38.438585 | orchestrator | 2026-03-28 06:24:38.438593 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-28 06:24:38.438601 | orchestrator | Saturday 28 March 2026 06:24:09 +0000 (0:00:02.054) 1:10:15.638 ******** 2026-03-28 06:24:38.438633 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:24:38.438642 | orchestrator | 2026-03-28 06:24:38.438650 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-28 06:24:38.438670 | orchestrator | Saturday 28 March 2026 06:24:11 +0000 (0:00:01.956) 1:10:17.594 ******** 2026-03-28 06:24:38.438678 | orchestrator | ok: [testbed-node-4] 2026-03-28 06:24:38.438686 | orchestrator | 2026-03-28 06:24:38.438694 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-28 06:24:38.438702 | orchestrator | Saturday 28 March 2026 06:24:13 +0000 (0:00:02.214) 1:10:19.809 ******** 2026-03-28 06:24:38.438710 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 06:24:38.438718 | orchestrator | 2026-03-28 06:24:38.438726 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-28 06:24:38.438734 | 
orchestrator | 2026-03-28 06:24:38.438742 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-28 06:24:38.438809 | orchestrator | Saturday 28 March 2026 06:24:16 +0000 (0:00:02.792) 1:10:22.601 ******** 2026-03-28 06:24:38.438822 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-28 06:24:38.438829 | orchestrator | 2026-03-28 06:24:38.438837 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-28 06:24:38.438845 | orchestrator | Saturday 28 March 2026 06:24:17 +0000 (0:00:01.182) 1:10:23.784 ******** 2026-03-28 06:24:38.438853 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.438861 | orchestrator | 2026-03-28 06:24:38.438869 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-28 06:24:38.438877 | orchestrator | Saturday 28 March 2026 06:24:18 +0000 (0:00:01.487) 1:10:25.272 ******** 2026-03-28 06:24:38.438885 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.438892 | orchestrator | 2026-03-28 06:24:38.438900 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-28 06:24:38.438908 | orchestrator | Saturday 28 March 2026 06:24:20 +0000 (0:00:01.226) 1:10:26.498 ******** 2026-03-28 06:24:38.438916 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.438924 | orchestrator | 2026-03-28 06:24:38.438931 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-28 06:24:38.438939 | orchestrator | Saturday 28 March 2026 06:24:21 +0000 (0:00:01.457) 1:10:27.956 ******** 2026-03-28 06:24:38.438947 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.438955 | orchestrator | 2026-03-28 06:24:38.438979 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-28 06:24:38.438988 | orchestrator | Saturday 
28 March 2026 06:24:22 +0000 (0:00:01.208) 1:10:29.165 ******** 2026-03-28 06:24:38.438996 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.439003 | orchestrator | 2026-03-28 06:24:38.439011 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-28 06:24:38.439019 | orchestrator | Saturday 28 March 2026 06:24:23 +0000 (0:00:01.133) 1:10:30.298 ******** 2026-03-28 06:24:38.439027 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.439035 | orchestrator | 2026-03-28 06:24:38.439043 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-28 06:24:38.439051 | orchestrator | Saturday 28 March 2026 06:24:25 +0000 (0:00:01.177) 1:10:31.475 ******** 2026-03-28 06:24:38.439059 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:38.439067 | orchestrator | 2026-03-28 06:24:38.439075 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-28 06:24:38.439083 | orchestrator | Saturday 28 March 2026 06:24:26 +0000 (0:00:01.169) 1:10:32.645 ******** 2026-03-28 06:24:38.439091 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.439099 | orchestrator | 2026-03-28 06:24:38.439107 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-28 06:24:38.439114 | orchestrator | Saturday 28 March 2026 06:24:27 +0000 (0:00:01.157) 1:10:33.803 ******** 2026-03-28 06:24:38.439122 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:24:38.439138 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:24:38.439146 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:24:38.439154 | orchestrator | 2026-03-28 06:24:38.439162 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-28 06:24:38.439170 | orchestrator | Saturday 28 March 2026 06:24:29 +0000 (0:00:01.759) 1:10:35.562 ******** 2026-03-28 06:24:38.439178 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:38.439186 | orchestrator | 2026-03-28 06:24:38.439193 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-28 06:24:38.439201 | orchestrator | Saturday 28 March 2026 06:24:30 +0000 (0:00:01.250) 1:10:36.813 ******** 2026-03-28 06:24:38.439209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 06:24:38.439217 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 06:24:38.439225 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 06:24:38.439233 | orchestrator | 2026-03-28 06:24:38.439241 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-28 06:24:38.439248 | orchestrator | Saturday 28 March 2026 06:24:33 +0000 (0:00:03.313) 1:10:40.127 ******** 2026-03-28 06:24:38.439256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 06:24:38.439265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 06:24:38.439273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 06:24:38.439281 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:38.439288 | orchestrator | 2026-03-28 06:24:38.439296 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-28 06:24:38.439304 | orchestrator | Saturday 28 March 2026 06:24:35 +0000 (0:00:01.462) 1:10:41.589 ******** 2026-03-28 06:24:38.439318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 06:24:38.439330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 06:24:38.439338 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 06:24:38.439346 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:38.439354 | orchestrator | 2026-03-28 06:24:38.439362 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 06:24:38.439370 | orchestrator | Saturday 28 March 2026 06:24:37 +0000 (0:00:02.091) 1:10:43.680 ******** 2026-03-28 06:24:38.439380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:24:38.439396 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:24:58.768373 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 06:24:58.768488 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.768506 | orchestrator | 2026-03-28 06:24:58.768518 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 06:24:58.768531 | orchestrator | Saturday 28 March 2026 06:24:38 +0000 (0:00:01.179) 1:10:44.860 ******** 2026-03-28 06:24:58.768544 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f433dc8c1c44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 06:24:30.893927', 'end': '2026-03-28 06:24:30.938260', 'delta': '0:00:00.044333', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f433dc8c1c44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 06:24:58.768559 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6241569b775f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 06:24:31.456664', 'end': '2026-03-28 06:24:31.493274', 'delta': '0:00:00.036610', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6241569b775f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 06:24:58.768589 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '80376407089e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 06:24:32.424244', 'end': '2026-03-28 06:24:32.470174', 'delta': '0:00:00.045930', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80376407089e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 06:24:58.768602 | orchestrator | 2026-03-28 06:24:58.768613 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 06:24:58.768624 | orchestrator | Saturday 28 March 2026 06:24:39 +0000 (0:00:01.279) 1:10:46.140 ******** 2026-03-28 06:24:58.768635 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.768647 | orchestrator | 2026-03-28 06:24:58.768658 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 06:24:58.768670 | orchestrator | Saturday 28 March 2026 06:24:41 +0000 (0:00:01.298) 1:10:47.439 ******** 2026-03-28 06:24:58.768681 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.768692 | orchestrator | 2026-03-28 06:24:58.768703 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-28 06:24:58.768714 | orchestrator | Saturday 28 March 2026 06:24:42 +0000 (0:00:01.271) 1:10:48.710 ******** 2026-03-28 06:24:58.768725 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.768786 | orchestrator | 2026-03-28 06:24:58.768800 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 06:24:58.768811 | orchestrator | Saturday 28 March 2026 06:24:43 +0000 (0:00:01.120) 1:10:49.830 ******** 2026-03-28 06:24:58.768822 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-28 06:24:58.768833 | orchestrator | 2026-03-28 06:24:58.768844 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:24:58.768855 | orchestrator | Saturday 28 March 2026 06:24:46 +0000 (0:00:03.018) 1:10:52.849 ******** 2026-03-28 06:24:58.768866 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.768876 | orchestrator | 2026-03-28 06:24:58.768888 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 06:24:58.768901 | orchestrator | Saturday 28 March 2026 06:24:47 +0000 (0:00:01.183) 1:10:54.032 ******** 2026-03-28 06:24:58.768932 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.768945 | orchestrator | 2026-03-28 06:24:58.768958 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 06:24:58.768970 | orchestrator | Saturday 28 March 2026 06:24:48 +0000 (0:00:01.174) 1:10:55.206 ******** 2026-03-28 06:24:58.768981 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.768991 | orchestrator | 2026-03-28 06:24:58.769002 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 06:24:58.769013 | orchestrator | Saturday 28 March 2026 06:24:50 +0000 (0:00:01.268) 1:10:56.475 ******** 2026-03-28 06:24:58.769023 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 06:24:58.769034 | orchestrator | 2026-03-28 06:24:58.769045 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 06:24:58.769056 | orchestrator | Saturday 28 March 2026 06:24:51 +0000 (0:00:01.204) 1:10:57.680 ******** 2026-03-28 06:24:58.769066 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.769077 | orchestrator | 2026-03-28 06:24:58.769088 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 06:24:58.769099 | orchestrator | Saturday 28 March 2026 06:24:52 +0000 (0:00:01.153) 1:10:58.833 ******** 2026-03-28 06:24:58.769110 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.769121 | orchestrator | 2026-03-28 06:24:58.769132 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 06:24:58.769142 | orchestrator | Saturday 28 March 2026 06:24:53 +0000 (0:00:01.173) 1:11:00.006 ******** 2026-03-28 06:24:58.769153 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.769164 | orchestrator | 2026-03-28 06:24:58.769175 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 06:24:58.769186 | orchestrator | Saturday 28 March 2026 06:24:54 +0000 (0:00:01.149) 1:11:01.156 ******** 2026-03-28 06:24:58.769196 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.769207 | orchestrator | 2026-03-28 06:24:58.769218 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 06:24:58.769229 | orchestrator | Saturday 28 March 2026 06:24:56 +0000 (0:00:01.314) 1:11:02.470 ******** 2026-03-28 06:24:58.769239 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:24:58.769250 | orchestrator | 2026-03-28 06:24:58.769261 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 06:24:58.769273 
| orchestrator | Saturday 28 March 2026 06:24:57 +0000 (0:00:01.264) 1:11:03.735 ******** 2026-03-28 06:24:58.769284 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:24:58.769294 | orchestrator | 2026-03-28 06:24:58.769305 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 06:24:58.769316 | orchestrator | Saturday 28 March 2026 06:24:58 +0000 (0:00:01.183) 1:11:04.919 ******** 2026-03-28 06:24:58.769327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:58.769354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}})  2026-03-28 06:24:58.769368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-28 06:24:58.769390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}})  2026-03-28 06:24:59.888025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-28 06:24:59.888197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}})  2026-03-28 06:24:59.888343 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}})  2026-03-28 06:24:59.888356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-28 06:24:59.888402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-28 06:24:59.888434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-28 06:25:00.107310 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:25:00.107425 | orchestrator | 2026-03-28 06:25:00.107448 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-28 06:25:00.107470 | orchestrator | Saturday 28 March 2026 06:24:59 +0000 (0:00:01.397) 1:11:06.317 ******** 2026-03-28 06:25:00.107493 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107518 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29', 'dm-uuid-LVM-uDZzMa1NuYxzqfjmSyEeKMGiSP14PIpxfQmkIicJobSweM1e3Xu4mrhLey7ZgTkz'], 'uuids': ['ffef7392-1bf0-40a9-b954-6528fa9d3d1b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b', 'scsi-SQEMU_QEMU_HARDDISK_a87118b5-ab65-41bd-8772-e2933164117b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a87118b5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0qLhmB-BF6t-8Szh-QZh7-WSVN-6n8Z-EdIGNA', 'scsi-0QEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e', 'scsi-SQEMU_QEMU_HARDDISK_85f5c7a4-97d3-420d-8739-a84ebbe15f9e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-28-01-42-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV', 'dm-uuid-CRYPT-LUKS2-92132eafae404a728980d6511c996c59-B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107810 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:00.107835 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--988a6493--5e43--51ae--8e8a--a4936b4cd9b5-osd--block--988a6493--5e43--51ae--8e8a--a4936b4cd9b5', 'dm-uuid-LVM-MLuLSxacDE58F60yI8JhAuDtWaaLmCArB1DyQTAOEkimZh4T5FPndbpRBr3TpPcV'], 'uuids': ['92132eaf-ae40-4a72-8980-d6511c996c59'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '85f5c7a4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B1DyQT-AOEk-imZh-4T5F-Pndb-pRBr-3TpPcV']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:13.426366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-OXDPV4-O5Tw-9AiU-V5CD-TG9S-Byst-iW5ZWl', 'scsi-0QEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8', 'scsi-SQEMU_QEMU_HARDDISK_1464ef4d-7de4-47e1-81b9-b7b5db3a3de8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1464ef4d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e38c52ab--9b1d--5b26--b141--c51106128b29-osd--block--e38c52ab--9b1d--5b26--b141--c51106128b29']}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:13.426520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-28 06:25:13.426557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '913ffec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1', 'scsi-SQEMU_QEMU_HARDDISK_913ffec0-7e23-4596-ab58-7f688cd8a74f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:25:13.426590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:25:13.426612 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:25:13.426625 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz', 'dm-uuid-CRYPT-LUKS2-ffef73921bf040a9b9546528fa9d3d1b-fQmkIi-cJob-SweM-1e3X-u4mr-hLey-7ZgTkz'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-28 06:25:13.426638 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:13.426651 | orchestrator |
2026-03-28 06:25:13.426669 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 06:25:13.426682 | orchestrator | Saturday 28 March 2026 06:25:01 +0000 (0:00:01.402) 1:11:07.719 ********
2026-03-28 06:25:13.426693 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:13.426705 | orchestrator |
2026-03-28 06:25:13.426716 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 06:25:13.426727 | orchestrator | Saturday 28 March 2026 06:25:02 +0000 (0:00:01.544) 1:11:09.264 ********
2026-03-28 06:25:13.426738 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:13.426832 | orchestrator |
2026-03-28 06:25:13.426846 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:25:13.426857 | orchestrator | Saturday 28 March 2026 06:25:03 +0000 (0:00:01.158) 1:11:10.422 ********
2026-03-28 06:25:13.426869 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:13.426880 | orchestrator |
2026-03-28 06:25:13.426891 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 06:25:13.426904 | orchestrator | Saturday 28 March 2026 06:25:05 +0000 (0:00:01.545) 1:11:11.968 ********
2026-03-28 06:25:13.426916 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:13.426930 | orchestrator |
2026-03-28 06:25:13.426942 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 06:25:13.426955 | orchestrator | Saturday 28 March 2026 06:25:06 +0000 (0:00:01.115) 1:11:13.084 ********
2026-03-28 06:25:13.426968 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:13.426980 | orchestrator |
2026-03-28 06:25:13.426992 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 06:25:13.427006 | orchestrator | Saturday 28 March 2026 06:25:07 +0000 (0:00:01.232) 1:11:14.316 ********
2026-03-28 06:25:13.427018 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:13.427031 | orchestrator |
2026-03-28 06:25:13.427044 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 06:25:13.427057 | orchestrator | Saturday 28 March 2026 06:25:09 +0000 (0:00:01.143) 1:11:15.460 ********
2026-03-28 06:25:13.427070 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 06:25:13.427083 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 06:25:13.427096 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 06:25:13.427108 | orchestrator |
2026-03-28 06:25:13.427121 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 06:25:13.427134 | orchestrator | Saturday 28 March 2026 06:25:10 +0000 (0:00:01.871) 1:11:17.331 ********
2026-03-28 06:25:13.427159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 06:25:13.427172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 06:25:13.427183 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 06:25:13.427194 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:13.427205 | orchestrator |
2026-03-28 06:25:13.427216 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 06:25:13.427227 | orchestrator | Saturday 28 March 2026 06:25:12 +0000 (0:00:01.201) 1:11:18.533 ********
2026-03-28 06:25:13.427238 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-03-28 06:25:13.427250 | orchestrator |
2026-03-28 06:25:13.427269 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 06:25:56.338130 | orchestrator | Saturday 28 March 2026 06:25:13 +0000 (0:00:01.314) 1:11:19.848 ********
2026-03-28 06:25:56.338246 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338263 | orchestrator |
2026-03-28 06:25:56.338276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 06:25:56.338288 | orchestrator | Saturday 28 March 2026 06:25:14 +0000 (0:00:01.166) 1:11:21.014 ********
2026-03-28 06:25:56.338300 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338319 | orchestrator |
2026-03-28 06:25:56.338338 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 06:25:56.338358 | orchestrator | Saturday 28 March 2026 06:25:15 +0000 (0:00:01.162) 1:11:22.176 ********
2026-03-28 06:25:56.338377 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338396 | orchestrator |
2026-03-28 06:25:56.338416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 06:25:56.338435 | orchestrator | Saturday 28 March 2026 06:25:16 +0000 (0:00:01.142) 1:11:23.319 ********
2026-03-28 06:25:56.338447 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.338460 | orchestrator |
2026-03-28 06:25:56.338471 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 06:25:56.338482 | orchestrator | Saturday 28 March 2026 06:25:18 +0000 (0:00:01.266) 1:11:24.585 ********
2026-03-28 06:25:56.338494 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:25:56.338506 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:25:56.338517 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:25:56.338529 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338540 | orchestrator |
2026-03-28 06:25:56.338551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 06:25:56.338562 | orchestrator | Saturday 28 March 2026 06:25:19 +0000 (0:00:01.451) 1:11:26.037 ********
2026-03-28 06:25:56.338573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:25:56.338585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:25:56.338596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:25:56.338609 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338622 | orchestrator |
2026-03-28 06:25:56.338634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 06:25:56.338647 | orchestrator | Saturday 28 March 2026 06:25:21 +0000 (0:00:01.460) 1:11:27.497 ********
2026-03-28 06:25:56.338663 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 06:25:56.338682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 06:25:56.338726 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:25:56.338777 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.338792 | orchestrator |
2026-03-28 06:25:56.338805 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 06:25:56.338818 | orchestrator | Saturday 28 March 2026 06:25:22 +0000 (0:00:01.409) 1:11:28.907 ********
2026-03-28 06:25:56.338854 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.338867 | orchestrator |
2026-03-28 06:25:56.338879 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 06:25:56.338892 | orchestrator | Saturday 28 March 2026 06:25:23 +0000 (0:00:01.174) 1:11:30.081 ********
2026-03-28 06:25:56.338905 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 06:25:56.338917 | orchestrator |
2026-03-28 06:25:56.338930 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-28 06:25:56.338942 | orchestrator | Saturday 28 March 2026 06:25:25 +0000 (0:00:01.478) 1:11:31.560 ********
2026-03-28 06:25:56.338955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:25:56.338969 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:25:56.338981 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:25:56.338992 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-28 06:25:56.339003 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 06:25:56.339013 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:25:56.339025 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 06:25:56.339035 | orchestrator |
2026-03-28 06:25:56.339046 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-28 06:25:56.339057 | orchestrator | Saturday 28 March 2026 06:25:27 +0000 (0:00:02.287) 1:11:33.847 ********
2026-03-28 06:25:56.339068 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 06:25:56.339079 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 06:25:56.339090 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 06:25:56.339100 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-28 06:25:56.339111 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 06:25:56.339122 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 06:25:56.339133 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 06:25:56.339144 | orchestrator |
2026-03-28 06:25:56.339154 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-28 06:25:56.339165 | orchestrator | Saturday 28 March 2026 06:25:29 +0000 (0:00:02.391) 1:11:36.239 ********
2026-03-28 06:25:56.339177 | orchestrator | changed: [testbed-node-5]
2026-03-28 06:25:56.339188 | orchestrator |
2026-03-28 06:25:56.339218 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-28 06:25:56.339230 | orchestrator | Saturday 28 March 2026 06:25:31 +0000 (0:00:02.031) 1:11:38.270 ********
2026-03-28 06:25:56.339241 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 06:25:56.339253 | orchestrator |
2026-03-28 06:25:56.339264 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-28 06:25:56.339275 | orchestrator | Saturday 28 March 2026 06:25:34 +0000 (0:00:02.439) 1:11:40.709 ********
2026-03-28 06:25:56.339286 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 06:25:56.339297 | orchestrator |
2026-03-28 06:25:56.339308 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 06:25:56.339319 | orchestrator | Saturday 28 March 2026 06:25:36 +0000 (0:00:01.948) 1:11:42.658 ********
2026-03-28 06:25:56.339329 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-28 06:25:56.339340 | orchestrator |
2026-03-28 06:25:56.339351 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 06:25:56.339370 | orchestrator | Saturday 28 March 2026 06:25:37 +0000 (0:00:01.222) 1:11:43.880 ********
2026-03-28 06:25:56.339381 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-28 06:25:56.339392 | orchestrator |
2026-03-28 06:25:56.339403 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 06:25:56.339414 | orchestrator | Saturday 28 March 2026 06:25:38 +0000 (0:00:01.177) 1:11:45.058 ********
2026-03-28 06:25:56.339424 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339435 | orchestrator |
2026-03-28 06:25:56.339446 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 06:25:56.339457 | orchestrator | Saturday 28 March 2026 06:25:39 +0000 (0:00:01.126) 1:11:46.185 ********
2026-03-28 06:25:56.339468 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339478 | orchestrator |
2026-03-28 06:25:56.339489 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 06:25:56.339500 | orchestrator | Saturday 28 March 2026 06:25:41 +0000 (0:00:01.631) 1:11:47.816 ********
2026-03-28 06:25:56.339511 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339522 | orchestrator |
2026-03-28 06:25:56.339532 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 06:25:56.339543 | orchestrator | Saturday 28 March 2026 06:25:42 +0000 (0:00:01.563) 1:11:49.380 ********
2026-03-28 06:25:56.339560 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339571 | orchestrator |
2026-03-28 06:25:56.339582 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 06:25:56.339593 | orchestrator | Saturday 28 March 2026 06:25:44 +0000 (0:00:01.540) 1:11:50.921 ********
2026-03-28 06:25:56.339603 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339614 | orchestrator |
2026-03-28 06:25:56.339625 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 06:25:56.339636 | orchestrator | Saturday 28 March 2026 06:25:45 +0000 (0:00:01.184) 1:11:52.105 ********
2026-03-28 06:25:56.339646 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339657 | orchestrator |
2026-03-28 06:25:56.339668 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 06:25:56.339679 | orchestrator | Saturday 28 March 2026 06:25:46 +0000 (0:00:01.120) 1:11:53.226 ********
2026-03-28 06:25:56.339689 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339700 | orchestrator |
2026-03-28 06:25:56.339711 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 06:25:56.339722 | orchestrator | Saturday 28 March 2026 06:25:47 +0000 (0:00:01.131) 1:11:54.358 ********
2026-03-28 06:25:56.339732 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339774 | orchestrator |
2026-03-28 06:25:56.339795 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 06:25:56.339813 | orchestrator | Saturday 28 March 2026 06:25:49 +0000 (0:00:02.001) 1:11:56.359 ********
2026-03-28 06:25:56.339831 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339842 | orchestrator |
2026-03-28 06:25:56.339853 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 06:25:56.339864 | orchestrator | Saturday 28 March 2026 06:25:51 +0000 (0:00:01.543) 1:11:57.903 ********
2026-03-28 06:25:56.339875 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339886 | orchestrator |
2026-03-28 06:25:56.339897 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 06:25:56.339908 | orchestrator | Saturday 28 March 2026 06:25:52 +0000 (0:00:00.815) 1:11:58.718 ********
2026-03-28 06:25:56.339919 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.339929 | orchestrator |
2026-03-28 06:25:56.339940 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 06:25:56.339951 | orchestrator | Saturday 28 March 2026 06:25:53 +0000 (0:00:00.782) 1:11:59.501 ********
2026-03-28 06:25:56.339968 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.339998 | orchestrator |
2026-03-28 06:25:56.340016 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 06:25:56.340032 | orchestrator | Saturday 28 March 2026 06:25:53 +0000 (0:00:00.829) 1:12:00.331 ********
2026-03-28 06:25:56.340043 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.340054 | orchestrator |
2026-03-28 06:25:56.340065 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 06:25:56.340075 | orchestrator | Saturday 28 March 2026 06:25:54 +0000 (0:00:00.839) 1:12:01.170 ********
2026-03-28 06:25:56.340086 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:25:56.340097 | orchestrator |
2026-03-28 06:25:56.340108 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 06:25:56.340118 | orchestrator | Saturday 28 March 2026 06:25:55 +0000 (0:00:00.797) 1:12:01.968 ********
2026-03-28 06:25:56.340129 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:25:56.340140 | orchestrator |
2026-03-28 06:25:56.340159 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 06:26:37.203911 | orchestrator | Saturday 28 March 2026 06:25:56 +0000 (0:00:00.789) 1:12:02.758 ********
2026-03-28 06:26:37.204032 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204048 | orchestrator |
2026-03-28 06:26:37.204061 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 06:26:37.204073 | orchestrator | Saturday 28 March 2026 06:25:57 +0000 (0:00:00.766) 1:12:03.525 ********
2026-03-28 06:26:37.204085 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204097 | orchestrator |
2026-03-28 06:26:37.204108 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 06:26:37.204119 | orchestrator | Saturday 28 March 2026 06:25:57 +0000 (0:00:00.796) 1:12:04.321 ********
2026-03-28 06:26:37.204130 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.204142 | orchestrator |
2026-03-28 06:26:37.204154 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 06:26:37.204165 | orchestrator | Saturday 28 March 2026 06:25:58 +0000 (0:00:00.814) 1:12:05.136 ********
2026-03-28 06:26:37.204176 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.204187 | orchestrator |
2026-03-28 06:26:37.204198 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-28 06:26:37.204209 | orchestrator | Saturday 28 March 2026 06:25:59 +0000 (0:00:00.816) 1:12:05.952 ********
2026-03-28 06:26:37.204220 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204231 | orchestrator |
2026-03-28 06:26:37.204242 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-28 06:26:37.204253 | orchestrator | Saturday 28 March 2026 06:26:00 +0000 (0:00:00.895) 1:12:06.848 ********
2026-03-28 06:26:37.204264 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204275 | orchestrator |
2026-03-28 06:26:37.204286 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-28 06:26:37.204297 | orchestrator | Saturday 28 March 2026 06:26:01 +0000 (0:00:00.795) 1:12:07.643 ********
2026-03-28 06:26:37.204309 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204319 | orchestrator |
2026-03-28 06:26:37.204331 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-28 06:26:37.204342 | orchestrator | Saturday 28 March 2026 06:26:01 +0000 (0:00:00.761) 1:12:08.405 ********
2026-03-28 06:26:37.204353 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204364 | orchestrator |
2026-03-28 06:26:37.204375 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-28 06:26:37.204386 | orchestrator | Saturday 28 March 2026 06:26:02 +0000 (0:00:00.841) 1:12:09.247 ********
2026-03-28 06:26:37.204397 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204408 | orchestrator |
2026-03-28 06:26:37.204419 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-28 06:26:37.204445 | orchestrator | Saturday 28 March 2026 06:26:03 +0000 (0:00:00.772) 1:12:10.020 ********
2026-03-28 06:26:37.204457 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204490 | orchestrator |
2026-03-28 06:26:37.204502 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-28 06:26:37.204513 | orchestrator | Saturday 28 March 2026 06:26:04 +0000 (0:00:00.816) 1:12:10.837 ********
2026-03-28 06:26:37.204524 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204535 | orchestrator |
2026-03-28 06:26:37.204546 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-28 06:26:37.204558 | orchestrator | Saturday 28 March 2026 06:26:05 +0000 (0:00:00.818) 1:12:11.656 ********
2026-03-28 06:26:37.204569 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204580 | orchestrator |
2026-03-28 06:26:37.204590 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-28 06:26:37.204601 | orchestrator | Saturday 28 March 2026 06:26:06 +0000 (0:00:00.855) 1:12:12.511 ********
2026-03-28 06:26:37.204612 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204623 | orchestrator |
2026-03-28 06:26:37.204634 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-28 06:26:37.204645 | orchestrator | Saturday 28 March 2026 06:26:06 +0000 (0:00:00.815) 1:12:13.326 ********
2026-03-28 06:26:37.204655 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204666 | orchestrator |
2026-03-28 06:26:37.204677 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-28 06:26:37.204688 | orchestrator | Saturday 28 March 2026 06:26:07 +0000 (0:00:00.778) 1:12:14.105 ********
2026-03-28 06:26:37.204699 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204710 | orchestrator |
2026-03-28 06:26:37.204721 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-28 06:26:37.204732 | orchestrator | Saturday 28 March 2026 06:26:08 +0000 (0:00:00.779) 1:12:14.885 ********
2026-03-28 06:26:37.204743 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204753 | orchestrator |
2026-03-28 06:26:37.204764 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-28 06:26:37.204802 | orchestrator | Saturday 28 March 2026 06:26:09 +0000 (0:00:00.758) 1:12:15.644 ********
2026-03-28 06:26:37.204813 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.204824 | orchestrator |
2026-03-28 06:26:37.204835 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 06:26:37.204846 | orchestrator | Saturday 28 March 2026 06:26:10 +0000 (0:00:01.671) 1:12:17.315 ********
2026-03-28 06:26:37.204857 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.204868 | orchestrator |
2026-03-28 06:26:37.204879 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 06:26:37.204890 | orchestrator | Saturday 28 March 2026 06:26:12 +0000 (0:00:01.896) 1:12:19.211 ********
2026-03-28 06:26:37.204900 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-28 06:26:37.204912 | orchestrator |
2026-03-28 06:26:37.204923 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 06:26:37.204934 | orchestrator | Saturday 28 March 2026 06:26:13 +0000 (0:00:01.125) 1:12:20.337 ********
2026-03-28 06:26:37.204945 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.204956 | orchestrator |
2026-03-28 06:26:37.204967 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 06:26:37.204994 | orchestrator | Saturday 28 March 2026 06:26:15 +0000 (0:00:01.170) 1:12:21.507 ********
2026-03-28 06:26:37.205006 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205017 | orchestrator |
2026-03-28 06:26:37.205028 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 06:26:37.205039 | orchestrator | Saturday 28 March 2026 06:26:16 +0000 (0:00:01.178) 1:12:22.686 ********
2026-03-28 06:26:37.205050 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 06:26:37.205061 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 06:26:37.205072 | orchestrator |
2026-03-28 06:26:37.205083 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 06:26:37.205103 | orchestrator | Saturday 28 March 2026 06:26:18 +0000 (0:00:01.916) 1:12:24.603 ********
2026-03-28 06:26:37.205114 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.205126 | orchestrator |
2026-03-28 06:26:37.205137 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 06:26:37.205148 | orchestrator | Saturday 28 March 2026 06:26:19 +0000 (0:00:01.445) 1:12:26.049 ********
2026-03-28 06:26:37.205158 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205169 | orchestrator |
2026-03-28 06:26:37.205180 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 06:26:37.205191 | orchestrator | Saturday 28 March 2026 06:26:20 +0000 (0:00:01.204) 1:12:27.253 ********
2026-03-28 06:26:37.205202 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205213 | orchestrator |
2026-03-28 06:26:37.205224 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 06:26:37.205235 | orchestrator | Saturday 28 March 2026 06:26:21 +0000 (0:00:00.842) 1:12:28.096 ********
2026-03-28 06:26:37.205245 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205256 | orchestrator |
2026-03-28 06:26:37.205267 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 06:26:37.205278 | orchestrator | Saturday 28 March 2026 06:26:22 +0000 (0:00:00.788) 1:12:28.884 ********
2026-03-28 06:26:37.205289 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-28 06:26:37.205300 | orchestrator |
2026-03-28 06:26:37.205311 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 06:26:37.205322 | orchestrator | Saturday 28 March 2026 06:26:23 +0000 (0:00:01.121) 1:12:30.006 ********
2026-03-28 06:26:37.205333 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.205343 | orchestrator |
2026-03-28 06:26:37.205354 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 06:26:37.205371 | orchestrator | Saturday 28 March 2026 06:26:25 +0000 (0:00:01.802) 1:12:31.808 ********
2026-03-28 06:26:37.205382 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 06:26:37.205393 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 06:26:37.205404 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 06:26:37.205415 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205426 | orchestrator |
2026-03-28 06:26:37.205437 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 06:26:37.205448 | orchestrator | Saturday 28 March 2026 06:26:26 +0000 (0:00:01.170) 1:12:32.979 ********
2026-03-28 06:26:37.205459 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205470 | orchestrator |
2026-03-28 06:26:37.205481 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 06:26:37.205492 | orchestrator | Saturday 28 March 2026 06:26:27 +0000 (0:00:01.118) 1:12:34.097 ********
2026-03-28 06:26:37.205503 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205514 | orchestrator |
2026-03-28 06:26:37.205524 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 06:26:37.205535 | orchestrator | Saturday 28 March 2026 06:26:28 +0000 (0:00:01.198) 1:12:35.295 ********
2026-03-28 06:26:37.205546 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205557 | orchestrator |
2026-03-28 06:26:37.205568 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 06:26:37.205579 | orchestrator | Saturday 28 March 2026 06:26:30 +0000 (0:00:01.174) 1:12:36.470 ********
2026-03-28 06:26:37.205590 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205601 | orchestrator |
2026-03-28 06:26:37.205612 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 06:26:37.205623 | orchestrator | Saturday 28 March 2026 06:26:31 +0000 (0:00:01.160) 1:12:37.630 ********
2026-03-28 06:26:37.205634 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205651 | orchestrator |
2026-03-28 06:26:37.205662 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 06:26:37.205673 | orchestrator | Saturday 28 March 2026 06:26:32 +0000 (0:00:00.855) 1:12:38.486 ********
2026-03-28 06:26:37.205684 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.205695 | orchestrator |
2026-03-28 06:26:37.205706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 06:26:37.205717 | orchestrator | Saturday 28 March 2026 06:26:34 +0000 (0:00:02.097) 1:12:40.583 ********
2026-03-28 06:26:37.205728 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:26:37.205739 | orchestrator |
2026-03-28 06:26:37.205750 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 06:26:37.205761 | orchestrator | Saturday 28 March 2026 06:26:34 +0000 (0:00:00.783) 1:12:41.366 ********
2026-03-28 06:26:37.205791 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-28 06:26:37.205802 | orchestrator |
2026-03-28 06:26:37.205813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 06:26:37.205824 | orchestrator | Saturday 28 March 2026 06:26:36 +0000 (0:00:01.113) 1:12:42.480 ********
2026-03-28 06:26:37.205835 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:26:37.205846 | orchestrator |
2026-03-28 06:26:37.205858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 06:26:37.205875 | orchestrator | Saturday 28 March 2026 06:26:37 +0000 (0:00:01.144) 1:12:43.625 ********
2026-03-28 06:27:18.892585 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.892733 | orchestrator |
2026-03-28 06:27:18.892758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 06:27:18.892780 | orchestrator | Saturday 28 March 2026 06:26:38 +0000 (0:00:01.167) 1:12:44.793 ********
2026-03-28 06:27:18.892799 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.892818 | orchestrator |
2026-03-28 06:27:18.892838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 06:27:18.892858 | orchestrator | Saturday 28 March 2026 06:26:39 +0000 (0:00:01.150) 1:12:45.943 ********
2026-03-28 06:27:18.892877 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.892895 | orchestrator |
2026-03-28 06:27:18.892975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 06:27:18.893037 | orchestrator | Saturday 28 March 2026 06:26:40 +0000 (0:00:01.222) 1:12:47.166 ********
2026-03-28 06:27:18.893058 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.893078 | orchestrator |
2026-03-28 06:27:18.893132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 06:27:18.893156 | orchestrator | Saturday 28 March 2026 06:26:41 +0000 (0:00:01.233) 1:12:48.399 ********
2026-03-28 06:27:18.893180 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.893202 | orchestrator |
2026-03-28 06:27:18.893224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 06:27:18.893244 | orchestrator | Saturday 28 March 2026 06:26:43 +0000 (0:00:01.153) 1:12:49.552 ********
2026-03-28 06:27:18.893264 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.893284 | orchestrator |
2026-03-28 06:27:18.893304 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 06:27:18.893324 | orchestrator | Saturday 28 March 2026 06:26:44 +0000 (0:00:01.177) 1:12:50.729 ********
2026-03-28 06:27:18.893345 | orchestrator | skipping: [testbed-node-5]
2026-03-28 06:27:18.893366 | orchestrator |
2026-03-28 06:27:18.893387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 06:27:18.893407 | orchestrator | Saturday 28 March 2026 06:26:45 +0000 (0:00:01.186) 1:12:51.916 ********
2026-03-28 06:27:18.893428 | orchestrator | ok: [testbed-node-5]
2026-03-28 06:27:18.893449 | orchestrator |
2026-03-28 06:27:18.893467 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 06:27:18.893486 | orchestrator | Saturday 28 March 2026 06:26:46 +0000 (0:00:00.835) 1:12:52.751 ********
2026-03-28 06:27:18.893544 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-28 06:27:18.893567 | orchestrator |
2026-03-28 06:27:18.893603 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 06:27:18.893621 | orchestrator | Saturday 28 March 2026 06:26:47 +0000 (0:00:01.187) 1:12:53.938 ********
2026-03-28 06:27:18.893641 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-28 06:27:18.893661 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-28 06:27:18.893679 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-28 06:27:18.893691 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-28 06:27:18.893702 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-28 06:27:18.893713 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-28 06:27:18.893724 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-28 06:27:18.893735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-28 06:27:18.893747 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 06:27:18.893758 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 06:27:18.893768 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 06:27:18.893779 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 06:27:18.893790 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 06:27:18.893801 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 06:27:18.893812 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-28 06:27:18.893823 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-28 06:27:18.893834 | orchestrator |
2026-03-28 06:27:18.893848 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-28 06:27:18.893867 | orchestrator | Saturday 28 March 2026 06:26:53 +0000 (0:00:06.168) 1:13:00.107 ******** 2026-03-28 06:27:18.893880 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-28 06:27:18.893891 | orchestrator | 2026-03-28 06:27:18.893902 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-28 06:27:18.893960 | orchestrator | Saturday 28 March 2026 06:26:54 +0000 (0:00:01.154) 1:13:01.262 ******** 2026-03-28 06:27:18.893974 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:27:18.893986 | orchestrator | 2026-03-28 06:27:18.894000 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-28 06:27:18.894080 | orchestrator | Saturday 28 March 2026 06:26:56 +0000 (0:00:01.549) 1:13:02.811 ******** 2026-03-28 06:27:18.894093 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:27:18.894104 | orchestrator | 2026-03-28 06:27:18.894115 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-28 06:27:18.894126 | orchestrator | Saturday 28 March 2026 06:26:58 +0000 (0:00:01.676) 1:13:04.488 ******** 2026-03-28 06:27:18.894136 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894159 | orchestrator | 2026-03-28 06:27:18.894170 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-28 06:27:18.894211 | orchestrator | Saturday 28 March 2026 06:26:58 +0000 (0:00:00.822) 1:13:05.311 ******** 2026-03-28 06:27:18.894225 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894236 | 
orchestrator | 2026-03-28 06:27:18.894247 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-28 06:27:18.894258 | orchestrator | Saturday 28 March 2026 06:26:59 +0000 (0:00:00.862) 1:13:06.173 ******** 2026-03-28 06:27:18.894269 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894279 | orchestrator | 2026-03-28 06:27:18.894305 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-28 06:27:18.894316 | orchestrator | Saturday 28 March 2026 06:27:00 +0000 (0:00:00.772) 1:13:06.946 ******** 2026-03-28 06:27:18.894327 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894338 | orchestrator | 2026-03-28 06:27:18.894349 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-28 06:27:18.894360 | orchestrator | Saturday 28 March 2026 06:27:01 +0000 (0:00:00.782) 1:13:07.729 ******** 2026-03-28 06:27:18.894370 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894382 | orchestrator | 2026-03-28 06:27:18.894392 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-28 06:27:18.894403 | orchestrator | Saturday 28 March 2026 06:27:02 +0000 (0:00:00.830) 1:13:08.560 ******** 2026-03-28 06:27:18.894414 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894424 | orchestrator | 2026-03-28 06:27:18.894435 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-28 06:27:18.894446 | orchestrator | Saturday 28 March 2026 06:27:02 +0000 (0:00:00.817) 1:13:09.378 ******** 2026-03-28 06:27:18.894457 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894467 | orchestrator | 2026-03-28 06:27:18.894478 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-28 06:27:18.894489 | orchestrator | Saturday 28 March 2026 06:27:03 +0000 (0:00:00.771) 1:13:10.150 ******** 2026-03-28 06:27:18.894500 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894510 | orchestrator | 2026-03-28 06:27:18.894521 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-28 06:27:18.894532 | orchestrator | Saturday 28 March 2026 06:27:04 +0000 (0:00:00.764) 1:13:10.915 ******** 2026-03-28 06:27:18.894543 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894554 | orchestrator | 2026-03-28 06:27:18.894565 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 06:27:18.894576 | orchestrator | Saturday 28 March 2026 06:27:05 +0000 (0:00:00.838) 1:13:11.753 ******** 2026-03-28 06:27:18.894586 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894604 | orchestrator | 2026-03-28 06:27:18.894615 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 06:27:18.894626 | orchestrator | Saturday 28 March 2026 06:27:06 +0000 (0:00:00.807) 1:13:12.561 ******** 2026-03-28 06:27:18.894637 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894648 | orchestrator | 2026-03-28 06:27:18.894659 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 06:27:18.894670 | orchestrator | Saturday 28 March 2026 06:27:06 +0000 (0:00:00.806) 1:13:13.368 ******** 2026-03-28 06:27:18.894680 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-28 06:27:18.894691 | orchestrator | 2026-03-28 06:27:18.894702 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 06:27:18.894713 | orchestrator | Saturday 28 March 2026 06:27:11 +0000 (0:00:04.101) 1:13:17.469 ******** 2026-03-28 06:27:18.894724 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:27:18.894735 | orchestrator | 2026-03-28 06:27:18.894745 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 06:27:18.894756 | orchestrator | Saturday 28 March 2026 06:27:11 +0000 (0:00:00.913) 1:13:18.383 ******** 2026-03-28 06:27:18.894770 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-28 06:27:18.894784 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-28 06:27:18.894804 | orchestrator | 2026-03-28 06:27:18.894815 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 06:27:18.894826 | orchestrator | Saturday 28 March 2026 06:27:16 +0000 (0:00:04.484) 1:13:22.867 ******** 2026-03-28 06:27:18.894836 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894847 | orchestrator | 2026-03-28 06:27:18.894858 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 06:27:18.894869 | orchestrator | Saturday 28 March 2026 06:27:17 +0000 (0:00:00.846) 1:13:23.714 ******** 2026-03-28 06:27:18.894879 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894890 | orchestrator | 2026-03-28 06:27:18.894901 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 06:27:18.894938 | orchestrator | Saturday 28 March 2026 06:27:18 +0000 (0:00:00.787) 1:13:24.502 ******** 2026-03-28 06:27:18.894951 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:27:18.894962 | orchestrator | 2026-03-28 06:27:18.894973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 06:27:18.894992 | orchestrator | Saturday 28 March 2026 06:27:18 +0000 (0:00:00.810) 1:13:25.313 ******** 2026-03-28 06:28:23.908755 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.908877 | orchestrator | 2026-03-28 06:28:23.908896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 06:28:23.908910 | orchestrator | Saturday 28 March 2026 06:27:19 +0000 (0:00:00.832) 1:13:26.145 ******** 2026-03-28 06:28:23.908922 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.908933 | orchestrator | 2026-03-28 06:28:23.908945 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 06:28:23.908956 | orchestrator | Saturday 28 March 2026 06:27:20 +0000 (0:00:00.853) 1:13:26.999 ******** 2026-03-28 06:28:23.908967 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:28:23.908980 | orchestrator | 2026-03-28 06:28:23.908991 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 06:28:23.909002 | orchestrator | Saturday 28 March 2026 06:27:21 +0000 (0:00:00.866) 1:13:27.866 ******** 2026-03-28 06:28:23.909014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:28:23.909025 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:28:23.909036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:28:23.909047 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 06:28:23.909059 | orchestrator | 2026-03-28 06:28:23.909070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 06:28:23.909081 | orchestrator | Saturday 28 March 2026 06:27:22 +0000 (0:00:01.137) 1:13:29.004 ******** 2026-03-28 06:28:23.909092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:28:23.909103 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:28:23.909113 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:28:23.909180 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.909191 | orchestrator | 2026-03-28 06:28:23.909203 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 06:28:23.909214 | orchestrator | Saturday 28 March 2026 06:27:23 +0000 (0:00:01.097) 1:13:30.102 ******** 2026-03-28 06:28:23.909225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-28 06:28:23.909236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-28 06:28:23.909247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-28 06:28:23.909258 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.909268 | orchestrator | 2026-03-28 06:28:23.909296 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 06:28:23.909332 | orchestrator | Saturday 28 March 2026 06:27:24 +0000 (0:00:01.084) 1:13:31.186 ******** 2026-03-28 06:28:23.909345 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:28:23.909359 | orchestrator | 2026-03-28 06:28:23.909371 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 06:28:23.909384 | orchestrator | Saturday 28 March 2026 06:27:25 +0000 (0:00:00.878) 1:13:32.065 ******** 2026-03-28 06:28:23.909396 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-03-28 06:28:23.909408 | orchestrator | 2026-03-28 06:28:23.909421 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 06:28:23.909433 | orchestrator | Saturday 28 March 2026 06:27:27 +0000 (0:00:01.576) 1:13:33.641 ******** 2026-03-28 06:28:23.909446 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:28:23.909458 | orchestrator | 2026-03-28 06:28:23.909471 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-28 06:28:23.909483 | orchestrator | Saturday 28 March 2026 06:27:28 +0000 (0:00:01.423) 1:13:35.065 ******** 2026-03-28 06:28:23.909496 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-03-28 06:28:23.909509 | orchestrator | 2026-03-28 06:28:23.909522 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:28:23.909535 | orchestrator | Saturday 28 March 2026 06:27:29 +0000 (0:00:01.100) 1:13:36.166 ******** 2026-03-28 06:28:23.909548 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:28:23.909559 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 06:28:23.909570 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:28:23.909581 | orchestrator | 2026-03-28 06:28:23.909591 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:28:23.909602 | orchestrator | Saturday 28 March 2026 06:27:32 +0000 (0:00:03.178) 1:13:39.344 ******** 2026-03-28 06:28:23.909613 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-28 06:28:23.909624 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 06:28:23.909635 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:28:23.909646 | orchestrator | 2026-03-28 06:28:23.909657 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-28 06:28:23.909668 | orchestrator | Saturday 28 March 2026 06:27:34 +0000 (0:00:01.981) 1:13:41.326 ******** 2026-03-28 06:28:23.909679 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.909689 | orchestrator | 2026-03-28 06:28:23.909700 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-28 06:28:23.909711 | orchestrator | Saturday 28 March 2026 06:27:35 +0000 (0:00:00.804) 1:13:42.130 ******** 2026-03-28 06:28:23.909722 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-03-28 06:28:23.909734 | orchestrator | 2026-03-28 06:28:23.909744 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-28 06:28:23.909755 | orchestrator | Saturday 28 March 2026 06:27:36 +0000 (0:00:01.140) 1:13:43.271 ******** 2026-03-28 06:28:23.909767 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:28:23.909779 | orchestrator | 2026-03-28 06:28:23.909790 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-28 06:28:23.909801 | orchestrator | Saturday 28 March 2026 06:27:38 +0000 (0:00:01.659) 1:13:44.931 ******** 2026-03-28 06:28:23.909829 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:28:23.909842 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 06:28:23.909853 | orchestrator | 2026-03-28 06:28:23.909864 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 06:28:23.909875 | orchestrator | Saturday 28 March 2026 06:27:43 +0000 (0:00:05.100) 1:13:50.032 ******** 
2026-03-28 06:28:23.909893 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 06:28:23.909904 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 06:28:23.909915 | orchestrator | 2026-03-28 06:28:23.909926 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 06:28:23.909937 | orchestrator | Saturday 28 March 2026 06:27:46 +0000 (0:00:03.158) 1:13:53.190 ******** 2026-03-28 06:28:23.909948 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-28 06:28:23.909959 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:28:23.909970 | orchestrator | 2026-03-28 06:28:23.909981 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-28 06:28:23.909992 | orchestrator | Saturday 28 March 2026 06:27:48 +0000 (0:00:01.713) 1:13:54.904 ******** 2026-03-28 06:28:23.910003 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-28 06:28:23.910013 | orchestrator | 2026-03-28 06:28:23.910090 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-28 06:28:23.910102 | orchestrator | Saturday 28 March 2026 06:27:49 +0000 (0:00:01.203) 1:13:56.108 ******** 2026-03-28 06:28:23.910113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910205 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.910216 | orchestrator | 2026-03-28 06:28:23.910227 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-28 06:28:23.910238 | orchestrator | Saturday 28 March 2026 06:27:51 +0000 (0:00:01.628) 1:13:57.737 ******** 2026-03-28 06:28:23.910249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 06:28:23.910304 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.910315 | orchestrator | 2026-03-28 06:28:23.910326 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-28 06:28:23.910337 | orchestrator | Saturday 28 March 2026 06:27:52 +0000 (0:00:01.673) 1:13:59.410 ******** 2026-03-28 06:28:23.910348 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:28:23.910359 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:28:23.910370 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:28:23.910389 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:28:23.910401 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-28 06:28:23.910412 | orchestrator | 2026-03-28 06:28:23.910423 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-28 06:28:23.910434 | orchestrator | Saturday 28 March 2026 06:28:23 +0000 (0:00:30.144) 1:14:29.555 ******** 2026-03-28 06:28:23.910445 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:28:23.910456 | orchestrator | 2026-03-28 06:28:23.910467 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-28 06:28:23.910486 | orchestrator | Saturday 28 March 2026 06:28:23 +0000 (0:00:00.775) 1:14:30.331 ******** 2026-03-28 06:29:17.205613 | orchestrator | skipping: [testbed-node-5] 2026-03-28 06:29:17.205799 | orchestrator | 2026-03-28 06:29:17.205821 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-28 06:29:17.205834 | orchestrator | Saturday 28 March 2026 06:28:24 +0000 (0:00:00.777) 1:14:31.109 ******** 2026-03-28 06:29:17.205845 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-28 06:29:17.205857 | orchestrator | 2026-03-28 06:29:17.205868 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-28 06:29:17.205879 | orchestrator | Saturday 28 March 2026 06:28:25 +0000 (0:00:01.203) 1:14:32.313 ******** 2026-03-28 06:29:17.205890 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-28 06:29:17.205901 | orchestrator | 2026-03-28 06:29:17.205913 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-28 06:29:17.205936 | orchestrator | Saturday 28 March 2026 06:28:27 +0000 (0:00:01.189) 1:14:33.503 ******** 2026-03-28 06:29:17.205960 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:29:17.205972 | orchestrator | 2026-03-28 06:29:17.205984 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-28 06:29:17.205995 | orchestrator | Saturday 28 March 2026 06:28:29 +0000 (0:00:02.038) 1:14:35.541 ******** 2026-03-28 06:29:17.206006 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:29:17.206100 | orchestrator | 2026-03-28 06:29:17.206122 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-28 06:29:17.206143 | orchestrator | Saturday 28 March 2026 06:28:31 +0000 (0:00:01.991) 1:14:37.533 ******** 2026-03-28 06:29:17.206163 | orchestrator | ok: [testbed-node-5] 2026-03-28 06:29:17.206184 | orchestrator | 2026-03-28 06:29:17.206204 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-28 06:29:17.206225 | orchestrator | Saturday 28 March 2026 06:28:33 +0000 (0:00:02.281) 1:14:39.814 ******** 2026-03-28 06:29:17.206239 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 06:29:17.206251 | orchestrator | 2026-03-28 06:29:17.206262 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-28 06:29:17.206296 | 
orchestrator | skipping: no hosts matched 2026-03-28 06:29:17.206307 | orchestrator | 2026-03-28 06:29:17.206318 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-28 06:29:17.206329 | orchestrator | skipping: no hosts matched 2026-03-28 06:29:17.206357 | orchestrator | 2026-03-28 06:29:17.206369 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-28 06:29:17.206380 | orchestrator | skipping: no hosts matched 2026-03-28 06:29:17.206391 | orchestrator | 2026-03-28 06:29:17.206401 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-28 06:29:17.206412 | orchestrator | 2026-03-28 06:29:17.206423 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-28 06:29:17.206434 | orchestrator | Saturday 28 March 2026 06:28:37 +0000 (0:00:04.207) 1:14:44.021 ******** 2026-03-28 06:29:17.206468 | orchestrator | changed: [testbed-node-0] 2026-03-28 06:29:17.206479 | orchestrator | changed: [testbed-node-1] 2026-03-28 06:29:17.206490 | orchestrator | changed: [testbed-node-2] 2026-03-28 06:29:17.206501 | orchestrator | changed: [testbed-node-3] 2026-03-28 06:29:17.206511 | orchestrator | changed: [testbed-node-4] 2026-03-28 06:29:17.206522 | orchestrator | changed: [testbed-node-5] 2026-03-28 06:29:17.206533 | orchestrator | 2026-03-28 06:29:17.206544 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-28 06:29:17.206555 | orchestrator | Saturday 28 March 2026 06:28:40 +0000 (0:00:02.711) 1:14:46.733 ******** 2026-03-28 06:29:17.206566 | orchestrator | changed: [testbed-node-1] 2026-03-28 06:29:17.206576 | orchestrator | changed: [testbed-node-0] 2026-03-28 06:29:17.206587 | orchestrator | changed: [testbed-node-3] 2026-03-28 06:29:17.206598 | orchestrator | changed: [testbed-node-2] 2026-03-28 06:29:17.206608 | 
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
orchestrator | Saturday 28 March 2026 06:28:43 +0000 (0:00:03.376) 1:14:50.110 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
orchestrator | Saturday 28 March 2026 06:28:46 +0000 (0:00:02.513) 1:14:52.623 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
orchestrator | Saturday 28 March 2026 06:28:48 +0000 (0:00:02.157) 1:14:54.780 ********
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
orchestrator | Saturday 28 March 2026 06:28:50 +0000 (0:00:02.368) 1:14:57.149 ********
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
orchestrator | Saturday 28 March 2026 06:28:53 +0000 (0:00:02.368) 1:14:59.518 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
orchestrator | Saturday 28 March 2026 06:28:55 +0000 (0:00:02.518) 1:15:02.036 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
orchestrator | Saturday 28 March 2026 06:28:57 +0000 (0:00:02.185) 1:15:04.222 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
orchestrator | Saturday 28 March 2026 06:29:00 +0000 (0:00:02.623) 1:15:06.846 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
orchestrator | Saturday 28 March 2026 06:29:02 +0000 (0:00:02.196) 1:15:09.042 ********
orchestrator | skipping: [testbed-node-3]
orchestrator | ok: [testbed-node-0]
orchestrator | skipping: [testbed-node-4]
orchestrator | ok: [testbed-node-1]
orchestrator | skipping: [testbed-node-5]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
orchestrator | Saturday 28 March 2026 06:29:04 +0000 (0:00:02.231) 1:15:11.274 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
orchestrator | Saturday 28 March 2026 06:29:06 +0000 (0:00:01.846) 1:15:13.120 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
orchestrator | Saturday 28 March 2026 06:29:08 +0000 (0:00:01.860) 1:15:14.980 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
orchestrator | Saturday 28 March 2026 06:29:11 +0000 (0:00:02.456) 1:15:17.437 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Saturday 28 March 2026 06:29:13 +0000 (0:00:02.076) 1:15:19.514 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:15 +0000 (0:00:02.181) 1:15:21.696 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:17 +0000 (0:00:01.927) 1:15:23.623 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:19 +0000 (0:00:02.221) 1:15:25.845 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:21 +0000 (0:00:01.879) 1:15:27.724 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:23 +0000 (0:00:02.127) 1:15:29.852 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:25 +0000 (0:00:01.829) 1:15:31.682 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Saturday 28 March 2026 06:29:27 +0000 (0:00:01.844) 1:15:33.527 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Saturday 28 March 2026 06:29:28 +0000 (0:00:01.799) 1:15:35.326 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Saturday 28 March 2026 06:29:30 +0000 (0:00:02.038) 1:15:37.364 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
orchestrator | Saturday 28 March 2026 06:29:33 +0000 (0:00:02.261) 1:15:39.626 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
orchestrator | Saturday 28 March 2026 06:29:36 +0000 (0:00:03.508) 1:15:43.134 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
orchestrator | Saturday 28 March 2026 06:29:39 +0000 (0:00:03.045) 1:15:46.179 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
orchestrator | Saturday 28 March 2026 06:29:42 +0000 (0:00:02.935) 1:15:49.115 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
orchestrator | Saturday 28 March 2026 06:29:44 +0000 (0:00:02.075) 1:15:51.191 ********
orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
orchestrator | Saturday 28 March 2026 06:29:47 +0000 (0:00:02.514) 1:15:53.705 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
orchestrator | Saturday 28 March 2026 06:29:50 +0000 (0:00:02.975) 1:15:56.681 ********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Complete upgrade] ********************************************************
orchestrator |
orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
orchestrator | Saturday 28 March 2026 06:29:54 +0000 (0:00:04.660) 1:16:01.342 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
orchestrator | Saturday 28 March 2026 06:29:56 +0000 (0:00:01.767) 1:16:03.109 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
orchestrator | Saturday 28 March 2026 06:29:58 +0000 (0:00:01.727) 1:16:04.837 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
orchestrator | Saturday 28 March 2026 06:30:00 +0000 (0:00:02.281) 1:16:07.119 ********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | PLAY [Upgrade node-exporter] ***************************************************
orchestrator |
orchestrator | TASK [Stop node-exporter] ******************************************************
orchestrator | Saturday 28 March 2026 06:30:02 +0000 (0:00:01.992) 1:16:09.111 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
orchestrator | Saturday 28 March 2026 06:30:04 +0000 (0:00:02.265) 1:16:11.377 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
orchestrator | Saturday 28 March 2026 06:30:07 +0000 (0:00:02.441) 1:16:13.818 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
orchestrator | Saturday 28 March 2026 06:30:09 +0000 (0:00:02.174) 1:16:15.993 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
orchestrator | Saturday 28 March 2026 06:30:11 +0000 (0:00:02.130) 1:16:18.124 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | PLAY [Upgrade monitoring node] *************************************************
orchestrator |
orchestrator | TASK [Stop monitoring services] ************************************************
orchestrator | Saturday 28 March 2026 06:30:14 +0000 (0:00:02.746) 1:16:20.870 ********
orchestrator | skipping: [testbed-manager] => (item=alertmanager)
orchestrator | skipping: [testbed-manager] => (item=prometheus)
orchestrator | skipping: [testbed-manager] => (item=grafana-server)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
orchestrator | Saturday 28 March 2026 06:30:15 +0000 (0:00:01.241) 1:16:22.111 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
orchestrator | Saturday 28 March 2026 06:30:16 +0000 (0:00:01.087) 1:16:23.199 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
orchestrator | Saturday 28 March 2026 06:30:17 +0000 (0:00:01.097) 1:16:24.297 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
orchestrator | Saturday 28 March 2026 06:30:18 +0000 (0:00:01.123) 1:16:25.420 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
orchestrator | Saturday 28 March 2026 06:30:20 +0000 (0:00:01.120) 1:16:26.541 ********
orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
orchestrator | Saturday 28 March 2026 06:30:21 +0000 (0:00:01.154) 1:16:27.695 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
orchestrator | Saturday 28 March 2026 06:30:22 +0000 (0:00:01.232) 1:16:28.928 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
orchestrator | Saturday 28 March 2026 06:30:23 +0000 (0:00:01.160) 1:16:30.088 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
orchestrator | Saturday 28 March 2026 06:30:24 +0000 (0:00:01.295) 1:16:31.383 ********
orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
orchestrator | Saturday 28 March 2026 06:30:26 +0000 (0:00:01.164) 1:16:32.548 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
orchestrator | Saturday 28 March 2026 06:30:27 +0000 (0:00:01.162) 1:16:33.710 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
orchestrator | Saturday 28 March 2026 06:30:28 +0000 (0:00:01.152) 1:16:34.862 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
orchestrator | Saturday 28 March 2026 06:30:29 +0000 (0:00:01.194) 1:16:36.057 ********
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
orchestrator |
orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
orchestrator | Saturday 28 March 2026 06:30:31 +0000 (0:00:01.609) 1:16:37.666 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
orchestrator | Saturday 28 March 2026 06:30:33 +0000 (0:00:01.795) 1:16:39.461 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
orchestrator | Saturday 28 March 2026 06:30:34 +0000 (0:00:01.426) 1:16:40.888 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
orchestrator | Saturday 28 March 2026 06:30:35 +0000 (0:00:01.456) 1:16:42.345 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
orchestrator | Saturday 28 March 2026 06:30:37 +0000 (0:00:01.422) 1:16:43.768 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
orchestrator | Saturday 28 March 2026 06:30:38 +0000 (0:00:01.376) 1:16:45.145 ********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
orchestrator | Saturday 28 March 2026 06:30:40 +0000 (0:00:01.451) 1:16:46.596 ********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
orchestrator |
orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
orchestrator | Saturday 28 March 2026 06:30:42 +0000 (0:00:01.883) 1:16:48.480 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
orchestrator | Saturday 28 March 2026 06:30:43 +0000 (0:00:01.494) 1:16:49.975 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
orchestrator | Saturday 28 March 2026 06:30:44 +0000 (0:00:01.138) 1:16:51.113 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Backup the crushmap] *****************************************************
orchestrator | Saturday 28 March 2026 06:30:45 +0000 (0:00:01.269) 1:16:52.382 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Switch crush buckets to straw2] ******************************************
orchestrator | Saturday 28 March 2026 06:30:48 +0000 (0:00:02.925) 1:16:55.308 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Remove crushmap backup] **************************************************
orchestrator | Saturday 28 March 2026 06:30:51 +0000 (0:00:03.058) 1:16:58.367 ********
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | PLAY [Show ceph status] ********************************************************
orchestrator |
orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
orchestrator | Saturday 28 March 2026 06:30:53 +0000 (0:00:01.829) 1:17:00.196 ********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Show ceph status] ********************************************************
orchestrator | Saturday 28 March 2026 06:30:55 +0000 (0:00:01.846) 1:17:02.043 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [Show all daemons version] ************************************************
orchestrator | Saturday 28 March 2026 06:30:57 +0000 (0:00:02.311) 1:17:04.355 ********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | localhost        : ok=0    changed=0   unreachable=0   failed=0   skipped=1    rescued=0   ignored=0
orchestrator | testbed-manager  : ok=25   changed=1   unreachable=0   failed=0   skipped=76   rescued=0   ignored=0
orchestrator | testbed-node-0   : ok=248  changed=20  unreachable=0   failed=0   skipped=376  rescued=0   ignored=0
orchestrator | testbed-node-1   : ok=191  changed=15  unreachable=0   failed=0   skipped=350  rescued=0   ignored=0
orchestrator | testbed-node-2   : ok=196  changed=16  unreachable=0   failed=0   skipped=351  rescued=0   ignored=0
orchestrator | testbed-node-3   : ok=311  changed=22  unreachable=0   failed=0   skipped=348  rescued=0   ignored=0
orchestrator | testbed-node-4   : ok=307  changed=18  unreachable=0   failed=0   skipped=359  rescued=0   ignored=0
orchestrator | testbed-node-5   : ok=309  changed=17  unreachable=0   failed=0   skipped=358  rescued=0   ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Saturday 28 March 2026 06:31:01 +0000 (0:00:03.408) 1:17:07.764 ********
orchestrator | ===============================================================================
orchestrator | Disable pg autoscale on pools ------------------------------------------ 75.79s
orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.64s
orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.95s
orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.19s
orchestrator | Gather and delegate facts ---------------------------------------------- 31.33s
orchestrator | Waiting for clean pgs... ----------------------------------------------- 30.28s
orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.14s
orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 28.79s
orchestrator | Stop ceph mgr ---------------------------------------------------------- 28.05s
orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.99s
orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum...
------------ 22.98s 2026-03-28 06:31:02.115161 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.69s 2026-03-28 06:31:02.115172 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 17.24s 2026-03-28 06:31:02.115183 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.06s 2026-03-28 06:31:02.115193 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.24s 2026-03-28 06:31:02.115204 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.80s 2026-03-28 06:31:02.115215 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.67s 2026-03-28 06:31:02.115226 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.69s 2026-03-28 06:31:02.115237 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.15s 2026-03-28 06:31:02.115247 | orchestrator | Set cluster configs ---------------------------------------------------- 10.69s 2026-03-28 06:31:02.443030 | orchestrator | + osism apply cephclient 2026-03-28 06:31:04.505088 | orchestrator | 2026-03-28 06:31:04 | INFO  | Task af8610b7-e89e-4fbd-95d5-6b383ae10610 (cephclient) was prepared for execution. 2026-03-28 06:31:04.505189 | orchestrator | 2026-03-28 06:31:04 | INFO  | It takes a moment until task af8610b7-e89e-4fbd-95d5-6b383ae10610 (cephclient) has been started and output is visible here. 
2026-03-28 06:31:24.028022 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-28 06:31:24.028144 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-28 06:31:24.028175 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-28 06:31:24.028210 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-28 06:31:24.028235 | orchestrator | 2026-03-28 06:31:24.028247 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-28 06:31:24.028259 | orchestrator | 2026-03-28 06:31:24.028270 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-28 06:31:24.028281 | orchestrator | Saturday 28 March 2026 06:31:11 +0000 (0:00:01.713) 0:00:01.713 ******** 2026-03-28 06:31:24.028292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-28 06:31:24.028304 | orchestrator | 2026-03-28 06:31:24.028316 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-28 06:31:24.028326 | orchestrator | Saturday 28 March 2026 06:31:11 +0000 (0:00:00.845) 0:00:02.559 ******** 2026-03-28 06:31:24.028337 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-28 06:31:24.028348 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-28 06:31:24.028361 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-28 06:31:24.028372 | orchestrator | 2026-03-28 06:31:24.028383 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-28 06:31:24.028393 | orchestrator | Saturday 28 March 2026 06:31:13 +0000 (0:00:01.762) 0:00:04.321 ******** 2026-03-28 06:31:24.028404 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-28 06:31:24.028415 | orchestrator | 2026-03-28 06:31:24.028426 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-28 06:31:24.028437 | orchestrator | Saturday 28 March 2026 06:31:14 +0000 (0:00:01.102) 0:00:05.424 ******** 2026-03-28 06:31:24.028448 | orchestrator | ok: [testbed-manager] 2026-03-28 06:31:24.028459 | orchestrator | 2026-03-28 06:31:24.028470 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-28 06:31:24.028481 | orchestrator | Saturday 28 March 2026 06:31:15 +0000 (0:00:00.947) 0:00:06.371 ******** 2026-03-28 06:31:24.028492 | orchestrator | ok: [testbed-manager] 2026-03-28 06:31:24.028503 | orchestrator | 2026-03-28 06:31:24.028513 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-28 06:31:24.028524 | orchestrator | Saturday 28 March 2026 06:31:16 +0000 (0:00:00.908) 0:00:07.279 ******** 2026-03-28 06:31:24.028535 | orchestrator | ok: [testbed-manager] 2026-03-28 06:31:24.028546 | orchestrator | 2026-03-28 06:31:24.028559 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-28 06:31:24.028572 | orchestrator | Saturday 28 March 2026 06:31:17 +0000 (0:00:01.156) 0:00:08.436 ******** 2026-03-28 06:31:24.028585 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-28 06:31:24.028630 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-03-28 06:31:24.028643 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-28 06:31:24.028656 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-28 06:31:24.028669 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-28 06:31:24.028682 | orchestrator | 2026-03-28 06:31:24.028695 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
****************** 2026-03-28 06:31:24.028708 | orchestrator | Saturday 28 March 2026 06:31:21 +0000 (0:00:04.101) 0:00:12.538 ******** 2026-03-28 06:31:24.028721 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-28 06:31:24.028733 | orchestrator | 2026-03-28 06:31:24.028744 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-28 06:31:24.028755 | orchestrator | Saturday 28 March 2026 06:31:22 +0000 (0:00:00.562) 0:00:13.100 ******** 2026-03-28 06:31:24.028781 | orchestrator | skipping: [testbed-manager] 2026-03-28 06:31:24.028793 | orchestrator | 2026-03-28 06:31:24.028804 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-28 06:31:24.028824 | orchestrator | Saturday 28 March 2026 06:31:22 +0000 (0:00:00.187) 0:00:13.288 ******** 2026-03-28 06:31:24.028835 | orchestrator | skipping: [testbed-manager] 2026-03-28 06:31:24.028846 | orchestrator | 2026-03-28 06:31:24.028857 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 06:31:24.028868 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 06:31:24.028880 | orchestrator | 2026-03-28 06:31:24.028891 | orchestrator | 2026-03-28 06:31:24.028902 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 06:31:24.028913 | orchestrator | Saturday 28 March 2026 06:31:23 +0000 (0:00:01.122) 0:00:14.411 ******** 2026-03-28 06:31:24.028924 | orchestrator | =============================================================================== 2026-03-28 06:31:24.028935 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2026-03-28 06:31:24.028946 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.76s 2026-03-28 06:31:24.028957 | orchestrator | 
osism.services.cephclient : Manage cephclient service ------------------- 1.16s 2026-03-28 06:31:24.028968 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.12s 2026-03-28 06:31:24.028979 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.10s 2026-03-28 06:31:24.028990 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-03-28 06:31:24.029017 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-03-28 06:31:24.029028 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.85s 2026-03-28 06:31:24.029039 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.56s 2026-03-28 06:31:24.029050 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.19s 2026-03-28 06:31:24.359090 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-28 06:31:24.359168 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-03-28 06:31:24.368676 | orchestrator | + set -e 2026-03-28 06:31:24.370112 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 06:31:24.370165 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 06:31:24.370183 | orchestrator | ++ INTERACTIVE=false 2026-03-28 06:31:24.370198 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 06:31:24.370212 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 06:31:24.370227 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 06:31:24.370241 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 06:31:24.370256 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 06:31:24.370271 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 06:31:24.370285 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 06:31:24.370301 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 
06:31:24.370316 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 06:31:24.370330 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-28 06:31:24.370344 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-28 06:31:24.370359 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-28 06:31:24.370511 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-28 06:31:24.370529 | orchestrator | ++ export ARA=false 2026-03-28 06:31:24.370544 | orchestrator | ++ ARA=false 2026-03-28 06:31:24.370559 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 06:31:24.370573 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 06:31:24.370677 | orchestrator | ++ export TEMPEST=false 2026-03-28 06:31:24.370712 | orchestrator | ++ TEMPEST=false 2026-03-28 06:31:24.370727 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 06:31:24.370742 | orchestrator | ++ IS_ZUUL=true 2026-03-28 06:31:24.370758 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 06:31:24.370772 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2026-03-28 06:31:24.370786 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 06:31:24.370800 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 06:31:24.370815 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 06:31:24.370830 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 06:31:24.370844 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 06:31:24.370858 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 06:31:24.370872 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 06:31:24.370888 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 06:31:24.370933 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-28 06:31:24.370948 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-28 06:31:24.370964 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 06:31:24.370993 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-03-28 06:31:24.377400 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-28 06:31:24.377439 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-28 06:31:24.377449 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 06:31:24.377460 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-03-28 06:31:46.345411 | orchestrator | 2026-03-28 06:31:46 | ERROR  | Unable to get ansible vault password 2026-03-28 06:31:46.345547 | orchestrator | 2026-03-28 06:31:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 06:31:46.345567 | orchestrator | 2026-03-28 06:31:46 | ERROR  | Dropping encrypted entries 2026-03-28 06:31:46.382488 | orchestrator | 2026-03-28 06:31:46 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-03-28 06:31:46.383051 | orchestrator | 2026-03-28 06:31:46 | INFO  | Kolla configuration check passed 2026-03-28 06:31:46.582943 | orchestrator | 2026-03-28 06:31:46 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-03-28 06:31:46.603169 | orchestrator | 2026-03-28 06:31:46 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-03-28 06:31:46.910470 | orchestrator | + osism migrate rabbitmq3to4 list 2026-03-28 06:32:08.238536 | orchestrator | 2026-03-28 06:32:08 | ERROR  | Unable to get ansible vault password 2026-03-28 06:32:08.238654 | orchestrator | 2026-03-28 06:32:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 06:32:08.238736 | orchestrator | 2026-03-28 06:32:08 | ERROR  | Dropping encrypted entries 2026-03-28 06:32:08.278290 | orchestrator | 2026-03-28 06:32:08 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-03-28 06:32:08.415660 | orchestrator | 2026-03-28 06:32:08 | INFO  | Found 206 classic queue(s) in vhost '/': 2026-03-28 06:32:08.415921 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-03-28 06:32:08.415948 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-03-28 06:32:08.415962 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-03-28 06:32:08.415981 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-03-28 06:32:08.416012 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican.workers_fanout_11054fc6351648899ff1b3b680a70555 (vhost: /, messages: 0) 2026-03-28 06:32:08.416102 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican.workers_fanout_1e26cf520a8a416ea126a13676260fc2 (vhost: /, messages: 0) 2026-03-28 06:32:08.416115 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican.workers_fanout_8b4067d673464ebaa452000f22429552 (vhost: /, messages: 0) 2026-03-28 06:32:08.416126 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-03-28 06:32:08.416136 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central (vhost: /, messages: 0) 2026-03-28 06:32:08.416146 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.416156 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.416166 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.416221 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_2e5dabf6d1a042e2b08f7ea75b9e293f (vhost: /, messages: 0) 2026-03-28 06:32:08.416234 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_7bdc6a6b35ca4c0dab06248506a28dfd (vhost: /, messages: 0) 2026-03-28 
06:32:08.416244 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_c47df5070b1242a2be62eb908c64b16a (vhost: /, messages: 0) 2026-03-28 06:32:08.416254 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_f0a5afd3874a4f3a9a41a3a2b284686d (vhost: /, messages: 0) 2026-03-28 06:32:08.416264 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_fb61865e025246df9f7cf4ad8c8a4615 (vhost: /, messages: 0) 2026-03-28 06:32:08.416274 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - central_fanout_fd434a019a644a0397f2316527c4b741 (vhost: /, messages: 0) 2026-03-28 06:32:08.416897 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-03-28 06:32:08.416925 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.416936 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.416951 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.416969 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup_fanout_0cebef8d4eb546edb016a1b4b481dd66 (vhost: /, messages: 0) 2026-03-28 06:32:08.416985 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup_fanout_74af5b10181140b2b460900eebf5b23a (vhost: /, messages: 0) 2026-03-28 06:32:08.417002 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-backup_fanout_b50eea2470ac4c80851904a73ac46936 (vhost: /, messages: 0) 2026-03-28 06:32:08.417016 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-03-28 06:32:08.417032 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.417296 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.417316 | orchestrator | 2026-03-28 
06:32:08 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.417327 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler_fanout_9306dc18757b49f298b07ea1f14f0e1e (vhost: /, messages: 0) 2026-03-28 06:32:08.417344 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler_fanout_b10af18d34ee4c2c95bf6ee19ce8fa22 (vhost: /, messages: 0) 2026-03-28 06:32:08.417381 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-scheduler_fanout_fc50f7cf56554e50b6c75bbaf704f679 (vhost: /, messages: 0) 2026-03-28 06:32:08.417400 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-03-28 06:32:08.417637 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-03-28 06:32:08.417655 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.417665 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_0437adb3bec34db2ba5421d04b3e10c8 (vhost: /, messages: 0) 2026-03-28 06:32:08.417679 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-03-28 06:32:08.418185 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.418227 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_1db837471bad41319f49a8bacc69cc5e (vhost: /, messages: 0) 2026-03-28 06:32:08.418245 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-03-28 06:32:08.418344 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.418361 | orchestrator | 2026-03-28 06:32:08 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_f4561ebed4354dbf86001495fe21069b (vhost: /, messages: 0) 2026-03-28 06:32:08.418386 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume_fanout_cdbd9930ef0c4b84b14c5f6da98e3f2f (vhost: /, messages: 0) 2026-03-28 06:32:08.418783 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume_fanout_da9f0fb8e39f4a1287910fb703bbc4dd (vhost: /, messages: 0) 2026-03-28 06:32:08.418815 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - cinder-volume_fanout_f2f88205a48b4fa2bdff2b178b7ec32a (vhost: /, messages: 0) 2026-03-28 06:32:08.418833 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute (vhost: /, messages: 0) 2026-03-28 06:32:08.418846 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-03-28 06:32:08.418856 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-03-28 06:32:08.418873 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-03-28 06:32:08.418978 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute_fanout_5504570292c94a048f19404c6762e74e (vhost: /, messages: 0) 2026-03-28 06:32:08.418999 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute_fanout_c7f09f95ccd748fd9c5dcc5b88d26c53 (vhost: /, messages: 0) 2026-03-28 06:32:08.419017 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - compute_fanout_e088735fb4874442852caf296ba20717 (vhost: /, messages: 0) 2026-03-28 06:32:08.419034 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor (vhost: /, messages: 0) 2026-03-28 06:32:08.419253 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.419279 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.419297 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-03-28 06:32:08.419313 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_072f2988774043609bf4a13ddef8077a (vhost: /, messages: 0) 2026-03-28 06:32:08.419478 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_0b36a57600704a1a8a74fc7322d641d6 (vhost: /, messages: 0) 2026-03-28 06:32:08.419495 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_0bde987b74af4dfb94807aad18ec6eb2 (vhost: /, messages: 0) 2026-03-28 06:32:08.419505 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_3d18028ce67a4108a0985499b91c165d (vhost: /, messages: 0) 2026-03-28 06:32:08.419516 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_6624310d05514529bd2eef276ea1542e (vhost: /, messages: 0) 2026-03-28 06:32:08.420209 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - conductor_fanout_895308a974cc4ab7a218e32f9ef4738b (vhost: /, messages: 0) 2026-03-28 06:32:08.420311 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - event.sample (vhost: /, messages: 4) 2026-03-28 06:32:08.420328 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-03-28 06:32:08.420359 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor.cccfuzkgzlpg (vhost: /, messages: 0) 2026-03-28 06:32:08.420371 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor.ganq4fbg527p (vhost: /, messages: 0) 2026-03-28 06:32:08.420392 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor.s62v6dwdbim6 (vhost: /, messages: 0) 2026-03-28 06:32:08.420404 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_076c7f43d5dd4333a58e20b9f9c9099f (vhost: /, messages: 0) 2026-03-28 06:32:08.420509 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_43dcd62ce37c477daf9652a39695729d (vhost: /, messages: 0) 2026-03-28 06:32:08.420523 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_49f2e6f75bbe43ac90ad05a4d8548b81 (vhost: /, 
messages: 0) 2026-03-28 06:32:08.420539 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_5adc145f785f4ec9881620927eebbbd0 (vhost: /, messages: 0) 2026-03-28 06:32:08.420550 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_670bef0f741f4ea39b4fa8ee77ca6b2f (vhost: /, messages: 0) 2026-03-28 06:32:08.420561 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_718bab8af8af4e59bb77b1ce42d72b5d (vhost: /, messages: 0) 2026-03-28 06:32:08.420627 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_75ef4d97a9fb4be7bc3d640f5ff56132 (vhost: /, messages: 0) 2026-03-28 06:32:08.420647 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_7cb2437d23d74fb4aa44a41923edb191 (vhost: /, messages: 0) 2026-03-28 06:32:08.420660 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - magnum-conductor_fanout_824b7f8dd2c24b3d8060b8db74408752 (vhost: /, messages: 0) 2026-03-28 06:32:08.420983 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-03-28 06:32:08.421009 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.421021 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.421034 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.421045 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data_fanout_1bfd0c39804e488ca9cd072c006060d1 (vhost: /, messages: 0) 2026-03-28 06:32:08.421192 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data_fanout_434619e05e2c484ca4d3fc005177f8e5 (vhost: /, messages: 0) 2026-03-28 06:32:08.421209 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-data_fanout_4a0d69aedd754a00b5dacef8eea96822 (vhost: /, messages: 0) 2026-03-28 06:32:08.421222 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-03-28 06:32:08.421453 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.421472 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.421484 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.421496 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler_fanout_3ebbac62df1f4a4a998255a02880d896 (vhost: /, messages: 0) 2026-03-28 06:32:08.421652 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler_fanout_92d520d5fc664d4eb743c099c8184ea2 (vhost: /, messages: 0) 2026-03-28 06:32:08.421723 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-scheduler_fanout_ee9ee4ac1bd5431da29f77589d22225e (vhost: /, messages: 0) 2026-03-28 06:32:08.421750 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-03-28 06:32:08.421890 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-03-28 06:32:08.421908 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-03-28 06:32:08.421919 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-03-28 06:32:08.422187 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share_fanout_36f4df54d6784f109349204ab512fd9d (vhost: /, messages: 0) 2026-03-28 06:32:08.422207 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share_fanout_4fdeed84e2e34d0ea66f4fd3e07aa823 (vhost: /, messages: 0) 2026-03-28 06:32:08.422218 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - manila-share_fanout_8ce64b412cbf45fda7e95bd6450ea10d (vhost: /, messages: 0) 2026-03-28 06:32:08.422477 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-03-28 06:32:08.422496 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-03-28 06:32:08.422507 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-03-28 06:32:08.422518 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-03-28 06:32:08.422764 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-03-28 06:32:08.422781 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-03-28 06:32:08.422793 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-03-28 06:32:08.423208 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-03-28 06:32:08.423283 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.423296 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.423307 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.423323 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2_fanout_365bded7e5f34626afdc298fb01b3a38 (vhost: /, messages: 0) 2026-03-28 06:32:08.423336 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2_fanout_82f7691040d34498ae04bc9841e71043 (vhost: /, messages: 0) 2026-03-28 06:32:08.423609 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - octavia_provisioning_v2_fanout_8bc06d62192e44dd918956f28085d1cf (vhost: /, messages: 0) 2026-03-28 06:32:08.423630 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer (vhost: /, messages: 0) 2026-03-28 06:32:08.423754 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.423769 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.423779 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.423789 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_4183c69c4e954c6ab5379e00c36e909c (vhost: /, messages: 0) 2026-03-28 06:32:08.423799 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_697aef9cbcf04f079e51c49d6d329fa1 (vhost: /, messages: 0) 2026-03-28 06:32:08.424142 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_7dae03309be24c339eef4e4d7d0e3a33 (vhost: /, messages: 0) 2026-03-28 06:32:08.424160 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_bed6eff2f3ea43abbf36dcd010c91881 (vhost: /, messages: 0) 2026-03-28 06:32:08.424170 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_c02a28769f824976b7ecea44c840ec42 (vhost: /, messages: 0) 2026-03-28 06:32:08.424225 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - producer_fanout_e8df376f0cae4f82a669c4d06803b8ec (vhost: /, messages: 0) 2026-03-28 06:32:08.424238 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-03-28 06:32:08.424252 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.424892 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.424911 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.424922 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_0aa8eadc507b40f7aa9e80464530b3cf (vhost: /, messages: 0) 2026-03-28 06:32:08.424932 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_27e689ad2e4948ef8d894580741f564f (vhost: /, messages: 0) 2026-03-28 
06:32:08.424942 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_3becb423084147d598a3994d0f849cd2 (vhost: /, messages: 0) 2026-03-28 06:32:08.424960 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_407198d6e3fb4a8788dfb540fc13bf58 (vhost: /, messages: 0) 2026-03-28 06:32:08.424970 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_8ab1d3f8f5724224beca53caa5b85adc (vhost: /, messages: 0) 2026-03-28 06:32:08.424981 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_9d7ce6de609d40f9bd24e89748d77615 (vhost: /, messages: 0) 2026-03-28 06:32:08.425170 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_a85210ee489648ba9420cf71854a8ccd (vhost: /, messages: 0) 2026-03-28 06:32:08.425188 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_ba53a3a42b7b4ee5bc6ce4734e83865f (vhost: /, messages: 0) 2026-03-28 06:32:08.425198 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-plugin_fanout_ef79799cc7c546aa83b78c5b32756d4d (vhost: /, messages: 0) 2026-03-28 06:32:08.425207 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-03-28 06:32:08.425217 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.425538 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.425557 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.425567 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_0511c29303ec442f9fee2dece0cd125b (vhost: /, messages: 0) 2026-03-28 06:32:08.425577 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_11ad4645390344d3bcaaca2cc2097790 (vhost: /, messages: 0) 2026-03-28 06:32:08.425587 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
q-reports-plugin_fanout_2e69183fb50943f49c5878300a3ab3de (vhost: /, messages: 0) 2026-03-28 06:32:08.425597 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_406660a15cf74b519dd3f00c145d4571 (vhost: /, messages: 0) 2026-03-28 06:32:08.426226 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_407243bf90724fc3a437f40238ec2937 (vhost: /, messages: 0) 2026-03-28 06:32:08.426297 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_53cfc214a36b46f1bf0394dc34f7c5e8 (vhost: /, messages: 0) 2026-03-28 06:32:08.426309 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_5d68ae0ffe1344929e76a88aca97d023 (vhost: /, messages: 0) 2026-03-28 06:32:08.426317 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_621f27bf367d42db8836aad99c41e3d1 (vhost: /, messages: 0) 2026-03-28 06:32:08.426326 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_685705c1dc8f4f109aeb7bb8c830d418 (vhost: /, messages: 0) 2026-03-28 06:32:08.426335 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_68a45afc8f32418ea447456819855b9a (vhost: /, messages: 0) 2026-03-28 06:32:08.426344 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_70f79bb7960543b984f56d4e7f9f7c5a (vhost: /, messages: 0) 2026-03-28 06:32:08.426353 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_70f95f18b28c4717a4174a8cf5dd9fc7 (vhost: /, messages: 0) 2026-03-28 06:32:08.426362 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_85eedc4a9d5d4ef6b02fdf48799e09d7 (vhost: /, messages: 0) 2026-03-28 06:32:08.426439 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_8d212ae798074d42a8358444d968b116 (vhost: /, messages: 0) 2026-03-28 06:32:08.426551 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_93844525e11b47e5982848fe804d013d (vhost: /, messages: 0) 2026-03-28 
06:32:08.426639 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_b71eeefa9acd47c0acadb233f2ccfdbe (vhost: /, messages: 0) 2026-03-28 06:32:08.426658 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-reports-plugin_fanout_f54a43a70d9e4249b29de2674e55eb8d (vhost: /, messages: 0) 2026-03-28 06:32:08.426675 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-03-28 06:32:08.426715 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.426851 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.426889 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.426915 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_083c56d690eb4e0aaee6f77b1e34fe78 (vhost: /, messages: 0) 2026-03-28 06:32:08.426933 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_191b5542a8cf47d2be5e8debe163f3cd (vhost: /, messages: 0) 2026-03-28 06:32:08.427014 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_2dbb9006e2a54c66bf2e49fa0afe86ee (vhost: /, messages: 0) 2026-03-28 06:32:08.427119 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_3a0fb5d39b3b42be827411fc973706d3 (vhost: /, messages: 0) 2026-03-28 06:32:08.427142 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_5154cf2711b046bca0b37abb336a003e (vhost: /, messages: 0) 2026-03-28 06:32:08.427158 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_5c102f8bac1a49bd81f50c601c0b9faa (vhost: /, messages: 0) 2026-03-28 06:32:08.427172 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
q-server-resource-versions_fanout_720344c335bc42af846b4d4d8cad3862 (vhost: /, messages: 0) 2026-03-28 06:32:08.427185 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_a6d0e8a3cd814a169a4f5bd46783acf5 (vhost: /, messages: 0) 2026-03-28 06:32:08.427878 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - q-server-resource-versions_fanout_cb635bc3742148499dfa8fcc8c100c45 (vhost: /, messages: 0) 2026-03-28 06:32:08.428005 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_13a747b57cee47aba2e5e2fe516ecb7a (vhost: /, messages: 0) 2026-03-28 06:32:08.428025 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_27aa3ac123ff4a428bb418d15e8abb9b (vhost: /, messages: 0) 2026-03-28 06:32:08.428040 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_30e97928ac39412fa6475dfff4160ab9 (vhost: /, messages: 0) 2026-03-28 06:32:08.428054 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_35c9d82b85304d9aba10ef8e463efdc1 (vhost: /, messages: 0) 2026-03-28 06:32:08.428068 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_39b31d63b4104122978504b9b632fc92 (vhost: /, messages: 0) 2026-03-28 06:32:08.428083 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_3ae6838221a54b68a1569bc63395c1e3 (vhost: /, messages: 0) 2026-03-28 06:32:08.428097 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_3b6872e7cac1428793801d2cb74569ca (vhost: /, messages: 0) 2026-03-28 06:32:08.428112 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_3d1b421b4d294ed99eb3a9bdf728e034 (vhost: /, messages: 0) 2026-03-28 06:32:08.428136 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_40495a52cc0a4a7c88bf23798123271f (vhost: /, messages: 0) 2026-03-28 06:32:08.428151 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_48700a88310140f7906bbb8462c82c86 (vhost: /, messages: 0) 2026-03-28 06:32:08.428243 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_638cdfda2aac43bbb7f3ce26485e0305 (vhost: /, messages: 0) 2026-03-28 06:32:08.428259 | 
orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_6649721200ad46c593469f306b085e9c (vhost: /, messages: 0) 2026-03-28 06:32:08.428360 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_6b46ce1f819943c683421d8d591ad885 (vhost: /, messages: 0) 2026-03-28 06:32:08.428380 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_7162161055bd4c159712b632e98101ad (vhost: /, messages: 0) 2026-03-28 06:32:08.428395 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_9a2b10ceeec2416698bc239622f9d013 (vhost: /, messages: 0) 2026-03-28 06:32:08.428410 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_aa92e93225a24bc09051fc77adbb84fe (vhost: /, messages: 0) 2026-03-28 06:32:08.428430 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_aeb8c1b17b9041f786fbe8e7629d3fcb (vhost: /, messages: 0) 2026-03-28 06:32:08.428445 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_d09fafa683be4725add3894dd63872be (vhost: /, messages: 0) 2026-03-28 06:32:08.428460 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - reply_d89f0d6cc6fa4120a6dda7fc054de78d (vhost: /, messages: 0) 2026-03-28 06:32:08.428476 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-03-28 06:32:08.428491 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-28 06:32:08.428666 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-28 06:32:08.428708 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-28 06:32:08.428731 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler_fanout_3239776560194d0b8281b71af7ba1b5e (vhost: /, messages: 0) 2026-03-28 06:32:08.428747 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler_fanout_4ddafa14783c40b8a7776e4b1b1594c0 (vhost: /, messages: 0) 2026-03-28 06:32:08.428788 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - 
scheduler_fanout_4e6a5a6a305f43edbb1c98918c081b31 (vhost: /, messages: 0)
2026-03-28 06:32:08.428803 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler_fanout_5720bfec3dda4368b87872611b8e1418 (vhost: /, messages: 0)
2026-03-28 06:32:08.428817 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - scheduler_fanout_84ce545910464d9190d5da51f85ab39d (vhost: /, messages: 0)
2026-03-28 06:32:08.429124 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker (vhost: /, messages: 0)
2026-03-28 06:32:08.429295 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-03-28 06:32:08.429315 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-03-28 06:32:08.429330 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-03-28 06:32:08.429344 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_0c22fc6ae2ee4a15b7156124c43dd14d (vhost: /, messages: 0)
2026-03-28 06:32:08.429367 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_7cc2c2401ed3408cbef40e77da3e8bd7 (vhost: /, messages: 0)
2026-03-28 06:32:08.429383 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_9ad1c9d284df42f1b83ae64bccfacb35 (vhost: /, messages: 0)
2026-03-28 06:32:08.429398 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_aa8e6188c2c74480b9834c1a8926279b (vhost: /, messages: 0)
2026-03-28 06:32:08.429412 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_bc492722468c424597b80e691a8730da (vhost: /, messages: 0)
2026-03-28 06:32:08.429427 | orchestrator | 2026-03-28 06:32:08 | INFO  |  - worker_fanout_e148afd36adb41ad9e71ce1a8696f88d (vhost: /, messages: 0)
2026-03-28 06:32:08.753017 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-28 06:32:10.757612 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-28 06:32:10.757795 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-03-28 06:32:10.757818 | orchestrator |                                   [--vhost VHOST]
2026-03-28 06:32:10.757831 | orchestrator |                                   [{list,delete,prepare,check}]
2026-03-28 06:32:10.757844 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-28 06:32:10.757858 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-28 06:32:11.476240 | orchestrator | ERROR
2026-03-28 06:32:11.476453 | orchestrator | {
2026-03-28 06:32:11.476489 | orchestrator |   "delta": "2:06:57.721137",
2026-03-28 06:32:11.476512 | orchestrator |   "end": "2026-03-28 06:32:11.064820",
2026-03-28 06:32:11.476534 | orchestrator |   "msg": "non-zero return code",
2026-03-28 06:32:11.476554 | orchestrator |   "rc": 2,
2026-03-28 06:32:11.476574 | orchestrator |   "start": "2026-03-28 04:25:13.343683"
2026-03-28 06:32:11.476592 | orchestrator | } failure
2026-03-28 06:32:11.807722 |
2026-03-28 06:32:11.807946 | PLAY RECAP
2026-03-28 06:32:11.808114 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-28 06:32:11.808186 |
2026-03-28 06:32:12.058446 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-28 06:32:12.061374 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-28 06:32:12.799718 |
2026-03-28 06:32:12.799887 | PLAY [Post output play]
2026-03-28 06:32:12.817402 |
2026-03-28 06:32:12.817536 | LOOP [stage-output : Register sources]
2026-03-28 06:32:12.889924 |
2026-03-28 06:32:12.890318 | TASK [stage-output : Check sudo]
2026-03-28 06:32:13.785034 | orchestrator | sudo: a password is required
2026-03-28 06:32:13.927520 | orchestrator | ok: Runtime: 0:00:00.016416
2026-03-28 06:32:13.935276 |
2026-03-28 06:32:13.935402 | LOOP [stage-output : Set source and destination for files and folders]
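The failed task above is a plain CLI argument error: the job invoked the subcommand `list-exchanges`, but the usage text printed by `osism migrate rabbitmq3to4` only accepts `list`, `delete`, `prepare`, and `check`, and the process exits with status 2. The exit status and error wording match Python's argparse choice validation. A minimal stand-in sketch (this is not osism's actual parser, just a reproduction of the validation behavior seen in the log):

```python
import argparse

# Hypothetical parser mirroring the usage output in the log above;
# the real osism CLI may define these options differently.
parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
parser.add_argument("--server")
parser.add_argument("--dry-run", action="store_true")
parser.add_argument("--no-close-connections", action="store_true")
parser.add_argument("--quorum", action="store_true")
parser.add_argument("--vhost")
parser.add_argument("command", nargs="?",
                    choices=["list", "delete", "prepare", "check"])

try:
    # Reproduces the failing invocation: 'list-exchanges' is not a valid choice.
    parser.parse_args(["list-exchanges"])
except SystemExit as exc:
    # argparse reports argument errors by exiting with status 2,
    # which is the "rc": 2 Ansible records for the task.
    print(f"exit code: {exc.code}")  # → exit code: 2
```

Under this assumption, re-running the migration step with one of the listed subcommands (e.g. `osism migrate rabbitmq3to4 list`) would avoid the argparse rejection, though which subcommand the job actually intends depends on the testbed playbook.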
2026-03-28 06:32:13.969409 | 2026-03-28 06:32:13.969640 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-28 06:32:14.049693 | orchestrator | ok 2026-03-28 06:32:14.065027 | 2026-03-28 06:32:14.065282 | LOOP [stage-output : Ensure target folders exist] 2026-03-28 06:32:14.515175 | orchestrator | ok: "docs" 2026-03-28 06:32:14.515476 | 2026-03-28 06:32:14.762746 | orchestrator | ok: "artifacts" 2026-03-28 06:32:15.008159 | orchestrator | ok: "logs" 2026-03-28 06:32:15.032259 | 2026-03-28 06:32:15.032448 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-28 06:32:15.071671 | 2026-03-28 06:32:15.071977 | TASK [stage-output : Make all log files readable] 2026-03-28 06:32:15.458910 | orchestrator | ok 2026-03-28 06:32:15.469747 | 2026-03-28 06:32:15.469993 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-28 06:32:15.506323 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:15.525211 | 2026-03-28 06:32:15.525380 | TASK [stage-output : Discover log files for compression] 2026-03-28 06:32:15.550497 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:15.564937 | 2026-03-28 06:32:15.565111 | LOOP [stage-output : Archive everything from logs] 2026-03-28 06:32:15.610322 | 2026-03-28 06:32:15.610488 | PLAY [Post cleanup play] 2026-03-28 06:32:15.618786 | 2026-03-28 06:32:15.618910 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 06:32:15.680609 | orchestrator | ok 2026-03-28 06:32:15.692541 | 2026-03-28 06:32:15.692663 | TASK [Set cloud fact (local deployment)] 2026-03-28 06:32:15.727281 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:15.744368 | 2026-03-28 06:32:15.744513 | TASK [Clean the cloud environment] 2026-03-28 06:32:16.336553 | orchestrator | 2026-03-28 06:32:16 - clean up servers 2026-03-28 06:32:17.142957 | orchestrator | 2026-03-28 06:32:17 - testbed-manager 2026-03-28 06:32:17.225698 | orchestrator | 2026-03-28 06:32:17 
- testbed-node-4 2026-03-28 06:32:17.314307 | orchestrator | 2026-03-28 06:32:17 - testbed-node-1 2026-03-28 06:32:17.406461 | orchestrator | 2026-03-28 06:32:17 - testbed-node-5 2026-03-28 06:32:17.495691 | orchestrator | 2026-03-28 06:32:17 - testbed-node-2 2026-03-28 06:32:17.588167 | orchestrator | 2026-03-28 06:32:17 - testbed-node-0 2026-03-28 06:32:17.676508 | orchestrator | 2026-03-28 06:32:17 - testbed-node-3 2026-03-28 06:32:17.776909 | orchestrator | 2026-03-28 06:32:17 - clean up keypairs 2026-03-28 06:32:17.795257 | orchestrator | 2026-03-28 06:32:17 - testbed 2026-03-28 06:32:17.818749 | orchestrator | 2026-03-28 06:32:17 - wait for servers to be gone 2026-03-28 06:32:28.851139 | orchestrator | 2026-03-28 06:32:28 - clean up ports 2026-03-28 06:32:29.041333 | orchestrator | 2026-03-28 06:32:29 - 0a238f8c-7480-4c12-bf2c-a0ce3768a613 2026-03-28 06:32:29.305193 | orchestrator | 2026-03-28 06:32:29 - 2df743ef-ab9f-4b61-a8d9-c81bd7368ca5 2026-03-28 06:32:29.583644 | orchestrator | 2026-03-28 06:32:29 - 604e9214-e0f3-4891-a5d9-c0556b60dd65 2026-03-28 06:32:29.806472 | orchestrator | 2026-03-28 06:32:29 - 6ae59ec6-bdbf-4d3d-9ba9-823adc544c8c 2026-03-28 06:32:30.021107 | orchestrator | 2026-03-28 06:32:30 - 8f52a1c0-2b0e-4c65-aafa-b47c72baa7dd 2026-03-28 06:32:30.246702 | orchestrator | 2026-03-28 06:32:30 - 9525412f-cd6f-442e-861f-93df5982361d 2026-03-28 06:32:30.772806 | orchestrator | 2026-03-28 06:32:30 - e5efb26f-aac4-459a-b33a-5452f08a5fc1 2026-03-28 06:32:30.991802 | orchestrator | 2026-03-28 06:32:30 - clean up volumes 2026-03-28 06:32:31.130852 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-2-node-base 2026-03-28 06:32:31.172780 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-5-node-base 2026-03-28 06:32:31.214665 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-0-node-base 2026-03-28 06:32:31.261623 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-4-node-base 2026-03-28 06:32:31.303731 | orchestrator | 2026-03-28 06:32:31 - 
testbed-volume-1-node-base 2026-03-28 06:32:31.346851 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-3-node-base 2026-03-28 06:32:31.387644 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-manager-base 2026-03-28 06:32:31.429504 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-6-node-3 2026-03-28 06:32:31.468855 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-4-node-4 2026-03-28 06:32:31.512628 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-3-node-3 2026-03-28 06:32:31.555156 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-1-node-4 2026-03-28 06:32:31.598456 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-5-node-5 2026-03-28 06:32:31.640550 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-8-node-5 2026-03-28 06:32:31.682731 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-7-node-4 2026-03-28 06:32:31.722139 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-0-node-3 2026-03-28 06:32:31.762501 | orchestrator | 2026-03-28 06:32:31 - testbed-volume-2-node-5 2026-03-28 06:32:31.800242 | orchestrator | 2026-03-28 06:32:31 - disconnect routers 2026-03-28 06:32:31.915597 | orchestrator | 2026-03-28 06:32:31 - testbed 2026-03-28 06:32:32.907827 | orchestrator | 2026-03-28 06:32:32 - clean up subnets 2026-03-28 06:32:32.980221 | orchestrator | 2026-03-28 06:32:32 - subnet-testbed-management 2026-03-28 06:32:33.187405 | orchestrator | 2026-03-28 06:32:33 - clean up networks 2026-03-28 06:32:33.322792 | orchestrator | 2026-03-28 06:32:33 - net-testbed-management 2026-03-28 06:32:33.641527 | orchestrator | 2026-03-28 06:32:33 - clean up security groups 2026-03-28 06:32:33.684649 | orchestrator | 2026-03-28 06:32:33 - testbed-management 2026-03-28 06:32:33.829728 | orchestrator | 2026-03-28 06:32:33 - testbed-node 2026-03-28 06:32:33.961482 | orchestrator | 2026-03-28 06:32:33 - clean up floating ips 2026-03-28 06:32:33.997818 | orchestrator | 2026-03-28 06:32:33 - 81.163.193.11 2026-03-28 06:32:34.430348 | orchestrator | 
2026-03-28 06:32:34 - clean up routers 2026-03-28 06:32:34.570326 | orchestrator | 2026-03-28 06:32:34 - testbed 2026-03-28 06:32:36.301791 | orchestrator | ok: Runtime: 0:00:19.936166 2026-03-28 06:32:36.306141 | 2026-03-28 06:32:36.306452 | PLAY RECAP 2026-03-28 06:32:36.306570 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-28 06:32:36.306621 | 2026-03-28 06:32:36.445307 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-28 06:32:36.446578 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-28 06:32:37.188888 | 2026-03-28 06:32:37.189076 | PLAY [Cleanup play] 2026-03-28 06:32:37.205336 | 2026-03-28 06:32:37.205488 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 06:32:37.256837 | orchestrator | ok 2026-03-28 06:32:37.263957 | 2026-03-28 06:32:37.264123 | TASK [Set cloud fact (local deployment)] 2026-03-28 06:32:37.299305 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:37.314872 | 2026-03-28 06:32:37.315042 | TASK [Clean the cloud environment] 2026-03-28 06:32:38.471255 | orchestrator | 2026-03-28 06:32:38 - clean up servers 2026-03-28 06:32:38.966851 | orchestrator | 2026-03-28 06:32:38 - clean up keypairs 2026-03-28 06:32:38.986868 | orchestrator | 2026-03-28 06:32:38 - wait for servers to be gone 2026-03-28 06:32:39.029968 | orchestrator | 2026-03-28 06:32:39 - clean up ports 2026-03-28 06:32:39.109819 | orchestrator | 2026-03-28 06:32:39 - clean up volumes 2026-03-28 06:32:39.187339 | orchestrator | 2026-03-28 06:32:39 - disconnect routers 2026-03-28 06:32:39.213317 | orchestrator | 2026-03-28 06:32:39 - clean up subnets 2026-03-28 06:32:39.236312 | orchestrator | 2026-03-28 06:32:39 - clean up networks 2026-03-28 06:32:39.408483 | orchestrator | 2026-03-28 06:32:39 - clean up security groups 2026-03-28 06:32:39.442066 | orchestrator | 2026-03-28 06:32:39 - clean up floating ips 2026-03-28 
06:32:39.488250 | orchestrator | 2026-03-28 06:32:39 - clean up routers 2026-03-28 06:32:39.853478 | orchestrator | ok: Runtime: 0:00:01.413078 2026-03-28 06:32:39.858137 | 2026-03-28 06:32:39.858296 | PLAY RECAP 2026-03-28 06:32:39.858414 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-28 06:32:39.858474 | 2026-03-28 06:32:39.997841 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-28 06:32:40.000400 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-28 06:32:40.773632 | 2026-03-28 06:32:40.773790 | PLAY [Base post-fetch] 2026-03-28 06:32:40.790042 | 2026-03-28 06:32:40.790185 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-28 06:32:40.846695 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:40.861380 | 2026-03-28 06:32:40.861598 | TASK [fetch-output : Set log path for single node] 2026-03-28 06:32:40.909974 | orchestrator | ok 2026-03-28 06:32:40.921392 | 2026-03-28 06:32:40.921627 | LOOP [fetch-output : Ensure local output dirs] 2026-03-28 06:32:41.442688 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/logs" 2026-03-28 06:32:41.721624 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/artifacts" 2026-03-28 06:32:41.994746 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8732b28726ea4e9386aa58ce2948e02e/work/docs" 2026-03-28 06:32:42.011113 | 2026-03-28 06:32:42.011325 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-28 06:32:42.960396 | orchestrator | changed: .d..t...... ./ 2026-03-28 06:32:42.960742 | orchestrator | changed: All items complete 2026-03-28 06:32:42.960798 | 2026-03-28 06:32:43.663061 | orchestrator | changed: .d..t...... ./ 2026-03-28 06:32:44.393615 | orchestrator | changed: .d..t...... 
./ 2026-03-28 06:32:44.409815 | 2026-03-28 06:32:44.409933 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-28 06:32:44.444921 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:44.450430 | orchestrator | skipping: Conditional result was False 2026-03-28 06:32:44.462683 | 2026-03-28 06:32:44.462762 | PLAY RECAP 2026-03-28 06:32:44.462814 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-28 06:32:44.462862 | 2026-03-28 06:32:44.583882 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-28 06:32:44.584935 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-28 06:32:45.294366 | 2026-03-28 06:32:45.294523 | PLAY [Base post] 2026-03-28 06:32:45.308651 | 2026-03-28 06:32:45.308776 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-28 06:32:46.281865 | orchestrator | changed 2026-03-28 06:32:46.292196 | 2026-03-28 06:32:46.292336 | PLAY RECAP 2026-03-28 06:32:46.292415 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-28 06:32:46.292490 | 2026-03-28 06:32:46.412372 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-28 06:32:46.414806 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-28 06:32:47.205558 | 2026-03-28 06:32:47.205736 | PLAY [Base post-logs] 2026-03-28 06:32:47.216538 | 2026-03-28 06:32:47.216673 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-28 06:32:47.668432 | localhost | changed 2026-03-28 06:32:47.678417 | 2026-03-28 06:32:47.678567 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-28 06:32:47.716939 | localhost | ok 2026-03-28 06:32:47.724732 | 2026-03-28 06:32:47.724901 | TASK [Set zuul-log-path fact] 2026-03-28 
06:32:47.742311 | localhost | ok 2026-03-28 06:32:47.755798 | 2026-03-28 06:32:47.755936 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-28 06:32:47.783251 | localhost | ok 2026-03-28 06:32:47.790248 | 2026-03-28 06:32:47.790422 | TASK [upload-logs : Create log directories] 2026-03-28 06:32:48.280296 | localhost | changed 2026-03-28 06:32:48.285229 | 2026-03-28 06:32:48.285394 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-28 06:32:48.778709 | localhost -> localhost | ok: Runtime: 0:00:00.006969 2026-03-28 06:32:48.788786 | 2026-03-28 06:32:48.788973 | TASK [upload-logs : Upload logs to log server] 2026-03-28 06:32:49.336783 | localhost | Output suppressed because no_log was given 2026-03-28 06:32:49.338787 | 2026-03-28 06:32:49.338915 | LOOP [upload-logs : Compress console log and json output] 2026-03-28 06:32:49.392559 | localhost | skipping: Conditional result was False 2026-03-28 06:32:49.397592 | localhost | skipping: Conditional result was False 2026-03-28 06:32:49.411261 | 2026-03-28 06:32:49.411482 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-28 06:32:49.471418 | localhost | skipping: Conditional result was False 2026-03-28 06:32:49.471989 | 2026-03-28 06:32:49.474287 | localhost | skipping: Conditional result was False 2026-03-28 06:32:49.488793 | 2026-03-28 06:32:49.488980 | LOOP [upload-logs : Upload console log and json output]